Stabilize TestFunctional/parallel/DockerEnv integration test #10492

Closed
ilya-zuyev opened this issue Feb 17, 2021 · 2 comments
Labels
kind/failing-test — Categorizes issue or PR as related to a consistently or frequently failing test.
kind/flake — Categorizes issue or PR as related to a flaky test.
priority/important-longterm — Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@ilya-zuyev
Contributor

TestFunctional/parallel/DockerEnv flakes when run in GitHub Actions.
Example of a failed run:

2021-02-17T00:01:22.7911009Z     helpers_test.go:240: <<< TestFunctional/parallel/DockerEnv FAILED: start of post-mortem logs <<<
2021-02-17T00:01:22.7912978Z     helpers_test.go:241: ======>  post-mortem[TestFunctional/parallel/DockerEnv]: minikube logs <======
2021-02-17T00:01:22.7915014Z     helpers_test.go:243: (dbg) Run:  ./minikube-linux-arm64 -p functional-20210216235525-2779755 logs -n 25
2021-02-17T00:01:23.4137669Z === CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService
2021-02-17T00:01:23.4140067Z     helpers_test.go:335: "nginx-svc" [e262f289-58b0-4c41-aad0-b1f27b215a87] Running
2021-02-17T00:01:25.7045729Z === CONT  TestFunctional/parallel/DockerEnv
2021-02-17T00:01:25.7048581Z     helpers_test.go:243: (dbg) Done: ./minikube-linux-arm64 -p functional-20210216235525-2779755 logs -n 25: (2.912515242s)
2021-02-17T00:01:25.7127524Z     helpers_test.go:248: TestFunctional/parallel/DockerEnv logs: 
2021-02-17T00:01:25.7129836Z         -- stdout --
2021-02-17T00:01:25.7130708Z         	* ==> Docker <==
2021-02-17T00:01:25.7131897Z         	* -- Logs begin at Tue 2021-02-16 23:57:11 UTC, end at Wed 2021-02-17 00:01:23 UTC. --
2021-02-17T00:01:25.7133743Z         	* Feb 16 23:58:20 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:20.769589195Z" level=error msg="stream copy error: reading from a closed fifo"
2021-02-17T00:01:25.7137288Z         	* Feb 16 23:58:20 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:20.944284964Z" level=error msg="82f970ae90ca4670a6bb734aee75fec4db961a63fea4557488a658b950d32d9a cleanup: failed to delete container from containerd: no such container"
2021-02-17T00:01:25.7142581Z         	* Feb 16 23:58:20 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:20.945330107Z" level=error msg="Handler for POST /v1.40/containers/82f970ae90ca4670a6bb734aee75fec4db961a63fea4557488a658b950d32d9a/start returned error: OCI runtime create failed: container_linux.go:370: starting container process caused: process_linux.go:459: container init caused: process_linux.go:448: writing syncT 'resume' caused: write init-p: broken pipe: unknown"
2021-02-17T00:01:25.7147074Z         	* Feb 16 23:58:21 functional-20210216235525-2779755 dockerd[411]: time="2021-02-16T23:58:21.103453944Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
2021-02-17T00:01:25.7155725Z         	* Feb 17 00:00:57 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:57.998953278Z" level=info msg="ignoring event" container=8fd1325a18ee143988be3547727d8bb1983f6642dca67c24eaa2e156fbdcedf8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7160823Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.007947254Z" level=info msg="ignoring event" container=61b9482f4323073909d3a860ac936509130f852404978d404900ec38e54e0200 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7164620Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.015376624Z" level=info msg="ignoring event" container=6136c721d0d7941a59afa82cc01c7280ca7b2d7261f750f68192a95b65f4844a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7169285Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.057495247Z" level=info msg="ignoring event" container=0f7cb48a86e1bdc0327f62552c3aabf601652416c933f2b744808cbd149eb4bd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7175111Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.065156312Z" level=info msg="ignoring event" container=4bf1331ef083bce8b8a4165534423ed97620ceb9a4843cf81a4f7085c2a22ef6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7180025Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.099294398Z" level=info msg="ignoring event" container=cc20d60fcb71c66921736e5e823362b244abe438582875b5aa83ff0b4cb7ad11 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7184353Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.110503491Z" level=info msg="ignoring event" container=5125090049bcd369f289c201a30a074c0cc5d55cc354b91a8f6cd5f2adff9e99 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7188513Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.135775116Z" level=info msg="ignoring event" container=adda7f4f538315d4d60bb5b953421c1b20eaf7a100fe60f28de1dcab458915d6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7192663Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.174797694Z" level=info msg="ignoring event" container=76a5892bf39bb79234bdec9a159fbf32376330a142812a83e967896484ec4b56 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7196657Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.193196620Z" level=info msg="ignoring event" container=42999b8184eca1699b899b5d048429eef0cb313b9a0bce3f9d103641b909aab1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7217822Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.193237973Z" level=info msg="ignoring event" container=65452e92862d23b63a6bca266a66dfdd3feb1d39dbe07ceafc55c2c945fa25ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7224584Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.193264828Z" level=info msg="ignoring event" container=f299474e9f3cffff18b490e6578c55864bf4a171d0d0a402e40f1ffd1c4bfbb9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7229128Z         	* Feb 17 00:00:58 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:00:58.398598093Z" level=info msg="ignoring event" container=f2e3cd415a888cf60100fbf5fb58a54a47731dddeb32463a3d8e1aa8ac3a8d09 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7234766Z         	* Feb 17 00:01:02 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:02.992413832Z" level=info msg="ignoring event" container=c6da32b7234d60e5a536b37b5d58cc2ee7f094c01a560cf7b71ad239de05a89e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7239798Z         	* Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:05.449034180Z" level=error msg="Handler for GET /v1.40/containers/aa25c43bff27d671e4dd7215cb95bd9abe1a7f4227ad5c564af3797019a42c70/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
2021-02-17T00:01:25.7246258Z         	* Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:05.450072234Z" level=error msg="Handler for GET /v1.40/containers/aa25c43bff27d671e4dd7215cb95bd9abe1a7f4227ad5c564af3797019a42c70/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
2021-02-17T00:01:25.7252441Z         	* Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
2021-02-17T00:01:25.7256664Z         	* Feb 17 00:01:05 functional-20210216235525-2779755 dockerd[411]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
2021-02-17T00:01:25.7260462Z         	* Feb 17 00:01:15 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:15.207118651Z" level=info msg="ignoring event" container=c48c263e44c9c2f75bc3f7c5a42c1ff3b9db3bbe83f3a81c18c1553d091d6d80 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7264918Z         	* Feb 17 00:01:15 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:15.269276332Z" level=info msg="ignoring event" container=f61b22da999cb0b63e1389394cad98ba5abdc954f772c957f6a5c3f0458c294e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
2021-02-17T00:01:25.7268345Z         	* Feb 17 00:01:15 functional-20210216235525-2779755 dockerd[411]: time="2021-02-17T00:01:15.855757804Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
2021-02-17T00:01:25.7269703Z         	* 
2021-02-17T00:01:25.7270226Z         	* ==> container status <==
2021-02-17T00:01:25.7271062Z         	* CONTAINER           IMAGE                                                                           CREATED             STATE               NAME                      ATTEMPT             POD ID
2021-02-17T00:01:25.7274144Z         	* f59bd12c23289       nginx@sha256:c2ce58e024275728b00a554ac25628af25c54782865b3487b11c21cafb7fabda   5 seconds ago       Running             nginx                     0                   8472496ad852e
2021-02-17T00:01:25.7276688Z         	* a07ef8bb4a5d8       95d99817fc335                                                                   8 seconds ago       Running             kube-apiserver            0                   b501d6f1e4173
2021-02-17T00:01:25.7279449Z         	* 79e4f0230c9d5       db91994f4ee8f                                                                   8 seconds ago       Running             coredns                   1                   8416187a50e92
2021-02-17T00:01:25.7302430Z         	* 60cc59b481124       788e63d07298d                                                                   24 seconds ago      Running             kube-proxy                1                   febddf7be60d8
2021-02-17T00:01:25.7305102Z         	* aa25c43bff27d       84bee7cc4870e                                                                   24 seconds ago      Running             storage-provisioner       1                   ce148f582a8ed
2021-02-17T00:01:25.7307307Z         	* 9f35eeb44c8f7       60d957e44ec8a                                                                   25 seconds ago      Running             kube-scheduler            1                   e03ded6bf9e51
2021-02-17T00:01:25.7309672Z         	* 71db52d9a3e8f       3a1a2b528610a                                                                   25 seconds ago      Running             kube-controller-manager   1                   db4a886c25f5b
2021-02-17T00:01:25.7311714Z         	* f61b22da999cb       95d99817fc335                                                                   25 seconds ago      Exited              kube-apiserver            1                   c48c263e44c9c
2021-02-17T00:01:25.7314122Z         	* 8f607bf42a9f1       05b738aa1bc63                                                                   25 seconds ago      Running             etcd                      1                   f0319c08752b2
2021-02-17T00:01:25.7315925Z         	* 65452e92862d2       84bee7cc4870e                                                                   2 minutes ago       Exited              storage-provisioner       0                   42999b8184eca
2021-02-17T00:01:25.7317392Z         	* c6da32b7234d6       db91994f4ee8f                                                                   3 minutes ago       Exited              coredns                   0                   76a5892bf39bb
2021-02-17T00:01:25.7319364Z         	* 5125090049bcd       788e63d07298d                                                                   3 minutes ago       Exited              kube-proxy                0                   0f7cb48a86e1b
2021-02-17T00:01:25.7321227Z         	* f299474e9f3cf       60d957e44ec8a                                                                   3 minutes ago       Exited              kube-scheduler            0                   6136c721d0d79
2021-02-17T00:01:25.7323886Z         	* 4bf1331ef083b       3a1a2b528610a                                                                   3 minutes ago       Exited              kube-controller-manager   0                   61b9482f43230
2021-02-17T00:01:25.7325479Z         	* 8fd1325a18ee1       05b738aa1bc63                                                                   3 minutes ago       Exited              etcd                      0                   cc20d60fcb71c
2021-02-17T00:01:25.7326394Z         	* 
2021-02-17T00:01:25.7327387Z         	* ==> coredns [79e4f0230c9d] <==
2021-02-17T00:01:25.7328034Z         	* .:53
2021-02-17T00:01:25.7329015Z         	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
2021-02-17T00:01:25.7330417Z         	* CoreDNS-1.7.0
2021-02-17T00:01:25.7331152Z         	* linux/arm64, go1.14.4, f59c03d
2021-02-17T00:01:25.7332015Z         	* [INFO] plugin/ready: Still waiting on: "kubernetes"
2021-02-17T00:01:25.7333190Z         	* 
2021-02-17T00:01:25.7333766Z         	* ==> coredns [c6da32b7234d] <==
2021-02-17T00:01:25.7336756Z         	* E0217 00:00:57.837669       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=243&timeout=7m7s&timeoutSeconds=427&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.7340796Z         	* E0217 00:00:57.837865       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?allowWatchBookmarks=true&resourceVersion=580&timeout=8m26s&timeoutSeconds=506&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.7344504Z         	* E0217 00:00:57.837884       1 reflector.go:382] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=201&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.7346573Z         	* .:53
2021-02-17T00:01:25.7347430Z         	* [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
2021-02-17T00:01:25.7348771Z         	* CoreDNS-1.7.0
2021-02-17T00:01:25.7349380Z         	* linux/arm64, go1.14.4, f59c03d
2021-02-17T00:01:25.7350146Z         	* [INFO] SIGTERM: Shutting down servers then terminating
2021-02-17T00:01:25.7357211Z         	* [INFO] plugin/health: Going into lameduck mode for 5s
2021-02-17T00:01:25.7357865Z         	* 
2021-02-17T00:01:25.7358391Z         	* ==> describe nodes <==
2021-02-17T00:01:25.7359539Z         	* Name:               functional-20210216235525-2779755
2021-02-17T00:01:25.7361176Z         	* Roles:              control-plane,master
2021-02-17T00:01:25.7362058Z         	* Labels:             beta.kubernetes.io/arch=arm64
2021-02-17T00:01:25.7362912Z         	*                     beta.kubernetes.io/os=linux
2021-02-17T00:01:25.7363837Z         	*                     kubernetes.io/arch=arm64
2021-02-17T00:01:25.7365065Z         	*                     kubernetes.io/hostname=functional-20210216235525-2779755
2021-02-17T00:01:25.7366056Z         	*                     kubernetes.io/os=linux
2021-02-17T00:01:25.7367032Z         	*                     minikube.k8s.io/commit=3bdb549339cf69353b01a489c6dbe349d7066bcf
2021-02-17T00:01:25.7368468Z         	*                     minikube.k8s.io/name=functional-20210216235525-2779755
2021-02-17T00:01:25.7369498Z         	*                     minikube.k8s.io/updated_at=2021_02_16T23_58_02_0700
2021-02-17T00:01:25.7370515Z         	*                     minikube.k8s.io/version=v1.17.1
2021-02-17T00:01:25.7371725Z         	*                     node-role.kubernetes.io/control-plane=
2021-02-17T00:01:25.7372950Z         	*                     node-role.kubernetes.io/master=
2021-02-17T00:01:25.7374401Z         	* Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
2021-02-17T00:01:25.7375667Z         	*                     node.alpha.kubernetes.io/ttl: 0
2021-02-17T00:01:25.7377172Z         	*                     volumes.kubernetes.io/controller-managed-attach-detach: true
2021-02-17T00:01:25.7378443Z         	* CreationTimestamp:  Tue, 16 Feb 2021 23:57:59 +0000
2021-02-17T00:01:25.7379123Z         	* Taints:             <none>
2021-02-17T00:01:25.7379720Z         	* Unschedulable:      false
2021-02-17T00:01:25.7380293Z         	* Lease:
2021-02-17T00:01:25.7381276Z         	*   HolderIdentity:  functional-20210216235525-2779755
2021-02-17T00:01:25.7382189Z         	*   AcquireTime:     <unset>
2021-02-17T00:01:25.7382846Z         	*   RenewTime:       Wed, 17 Feb 2021 00:01:22 +0000
2021-02-17T00:01:25.7383454Z         	* Conditions:
2021-02-17T00:01:25.7384418Z         	*   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
2021-02-17T00:01:25.7385801Z         	*   ----             ------  -----------------                 ------------------                ------                       -------
2021-02-17T00:01:25.7387207Z         	*   MemoryPressure   False   Wed, 17 Feb 2021 00:01:14 +0000   Tue, 16 Feb 2021 23:57:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
2021-02-17T00:01:25.7388988Z         	*   DiskPressure     False   Wed, 17 Feb 2021 00:01:14 +0000   Tue, 16 Feb 2021 23:57:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
2021-02-17T00:01:25.7391386Z         	*   PIDPressure      False   Wed, 17 Feb 2021 00:01:14 +0000   Tue, 16 Feb 2021 23:57:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
2021-02-17T00:01:25.7393003Z         	*   Ready            True    Wed, 17 Feb 2021 00:01:14 +0000   Wed, 17 Feb 2021 00:01:14 +0000   KubeletReady                 kubelet is posting ready status
2021-02-17T00:01:25.7393880Z         	* Addresses:
2021-02-17T00:01:25.7394552Z         	*   InternalIP:  192.168.82.108
2021-02-17T00:01:25.7395608Z         	*   Hostname:    functional-20210216235525-2779755
2021-02-17T00:01:25.7396388Z         	* Capacity:
2021-02-17T00:01:25.7396890Z         	*   cpu:                2
2021-02-17T00:01:25.7397931Z         	*   ephemeral-storage:  40474572Ki
2021-02-17T00:01:25.7398790Z         	*   hugepages-1Gi:      0
2021-02-17T00:01:25.7399565Z         	*   hugepages-2Mi:      0
2021-02-17T00:01:25.7400634Z         	*   hugepages-32Mi:     0
2021-02-17T00:01:25.7401430Z         	*   hugepages-64Ki:     0
2021-02-17T00:01:25.7402117Z         	*   memory:             8038232Ki
2021-02-17T00:01:25.7402643Z         	*   pods:               110
2021-02-17T00:01:25.7403315Z         	* Allocatable:
2021-02-17T00:01:25.7403837Z         	*   cpu:                2
2021-02-17T00:01:25.7404668Z         	*   ephemeral-storage:  40474572Ki
2021-02-17T00:01:25.7405521Z         	*   hugepages-1Gi:      0
2021-02-17T00:01:25.7406290Z         	*   hugepages-2Mi:      0
2021-02-17T00:01:25.7407084Z         	*   hugepages-32Mi:     0
2021-02-17T00:01:25.7407867Z         	*   hugepages-64Ki:     0
2021-02-17T00:01:25.7408463Z         	*   memory:             8038232Ki
2021-02-17T00:01:25.7408978Z         	*   pods:               110
2021-02-17T00:01:25.7409493Z         	* System Info:
2021-02-17T00:01:25.7410124Z         	*   Machine ID:                 46f6444822754a889e4650f359992409
2021-02-17T00:01:25.7411089Z         	*   System UUID:                50408af4-47b4-4574-ab83-34615404919a
2021-02-17T00:01:25.7412556Z         	*   Boot ID:                    b0b00e66-2c54-4a1e-86bd-8109c5527bb8
2021-02-17T00:01:25.7413773Z         	*   Kernel Version:             5.4.0-1029-aws
2021-02-17T00:01:25.7414457Z         	*   OS Image:                   Ubuntu 20.04.1 LTS
2021-02-17T00:01:25.7415104Z         	*   Operating System:           linux
2021-02-17T00:01:25.7415768Z         	*   Architecture:               arm64
2021-02-17T00:01:25.7416552Z         	*   Container Runtime Version:  docker://20.10.2
2021-02-17T00:01:25.7417329Z         	*   Kubelet Version:            v1.20.2
2021-02-17T00:01:25.7422174Z         	*   Kube-Proxy Version:         v1.20.2
2021-02-17T00:01:25.7422887Z         	* PodCIDR:                      10.244.0.0/24
2021-02-17T00:01:25.7424961Z         	* PodCIDRs:                     10.244.0.0/24
2021-02-17T00:01:25.7426007Z         	* Non-terminated Pods:          (8 in total)
2021-02-17T00:01:25.7427015Z         	*   Namespace                   Name                                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
2021-02-17T00:01:25.7428818Z         	*   ---------                   ----                                                         ------------  ----------  ---------------  -------------  ---
2021-02-17T00:01:25.7430054Z         	*   default                     nginx-svc                                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
2021-02-17T00:01:25.7432070Z         	*   kube-system                 coredns-74ff55c5b-9jwcl                                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     3m6s
2021-02-17T00:01:25.7433776Z         	*   kube-system                 etcd-functional-20210216235525-2779755                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         3m18s
2021-02-17T00:01:25.7436038Z         	*   kube-system                 kube-apiserver-functional-20210216235525-2779755             250m (12%)    0 (0%)      0 (0%)           0 (0%)         1s
2021-02-17T00:01:25.7438714Z         	*   kube-system                 kube-controller-manager-functional-20210216235525-2779755    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m18s
2021-02-17T00:01:25.7441004Z         	*   kube-system                 kube-proxy-lvfk2                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m6s
2021-02-17T00:01:25.7442826Z         	*   kube-system                 kube-scheduler-functional-20210216235525-2779755             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m18s
2021-02-17T00:01:25.7444679Z         	*   kube-system                 storage-provisioner                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m1s
2021-02-17T00:01:25.7445772Z         	* Allocated resources:
2021-02-17T00:01:25.7446773Z         	*   (Total limits may be over 100 percent, i.e., overcommitted.)
2021-02-17T00:01:25.7447817Z         	*   Resource           Requests    Limits
2021-02-17T00:01:25.7448680Z         	*   --------           --------    ------
2021-02-17T00:01:25.7449248Z         	*   cpu                750m (37%)  0 (0%)
2021-02-17T00:01:25.7449794Z         	*   memory             170Mi (2%)  170Mi (2%)
2021-02-17T00:01:25.7450633Z         	*   ephemeral-storage  100Mi (0%)  0 (0%)
2021-02-17T00:01:25.7451508Z         	*   hugepages-1Gi      0 (0%)      0 (0%)
2021-02-17T00:01:25.7452334Z         	*   hugepages-2Mi      0 (0%)      0 (0%)
2021-02-17T00:01:25.7453331Z         	*   hugepages-32Mi     0 (0%)      0 (0%)
2021-02-17T00:01:25.7454167Z         	*   hugepages-64Ki     0 (0%)      0 (0%)
2021-02-17T00:01:25.7454751Z         	* Events:
2021-02-17T00:01:25.7455385Z         	*   Type    Reason                   Age                    From        Message
2021-02-17T00:01:25.7456283Z         	*   ----    ------                   ----                   ----        -------
2021-02-17T00:01:25.7458703Z         	*   Normal  NodeHasSufficientMemory  3m34s (x4 over 3m34s)  kubelet     Node functional-20210216235525-2779755 status is now: NodeHasSufficientMemory
2021-02-17T00:01:25.7461425Z         	*   Normal  NodeHasNoDiskPressure    3m34s (x5 over 3m34s)  kubelet     Node functional-20210216235525-2779755 status is now: NodeHasNoDiskPressure
2021-02-17T00:01:25.7463881Z         	*   Normal  NodeHasSufficientPID     3m34s (x4 over 3m34s)  kubelet     Node functional-20210216235525-2779755 status is now: NodeHasSufficientPID
2021-02-17T00:01:25.7465381Z         	*   Normal  Starting                 3m18s                  kubelet     Starting kubelet.
2021-02-17T00:01:25.7467180Z         	*   Normal  NodeHasSufficientMemory  3m18s                  kubelet     Node functional-20210216235525-2779755 status is now: NodeHasSufficientMemory
2021-02-17T00:01:25.7470701Z         	*   Normal  NodeHasNoDiskPressure    3m18s                  kubelet     Node functional-20210216235525-2779755 status is now: NodeHasNoDiskPressure
2021-02-17T00:01:25.7473635Z         	*   Normal  NodeHasSufficientPID     3m18s                  kubelet     Node functional-20210216235525-2779755 status is now: NodeHasSufficientPID
2021-02-17T00:01:25.7476442Z         	*   Normal  NodeNotReady             3m18s                  kubelet     Node functional-20210216235525-2779755 status is now: NodeNotReady
2021-02-17T00:01:25.7479244Z         	*   Normal  NodeAllocatableEnforced  3m18s                  kubelet     Updated Node Allocatable limit across pods
2021-02-17T00:01:25.7481460Z         	*   Normal  NodeReady                3m8s                   kubelet     Node functional-20210216235525-2779755 status is now: NodeReady
2021-02-17T00:01:25.7484473Z         	*   Normal  Starting                 3m4s                   kube-proxy  Starting kube-proxy.
2021-02-17T00:01:25.7486650Z         	*   Normal  Starting                 15s                    kube-proxy  Starting kube-proxy.
2021-02-17T00:01:25.7489122Z         	*   Normal  Starting                 12s                    kubelet     Starting kubelet.
2021-02-17T00:01:25.7491506Z         	*   Normal  NodeHasSufficientMemory  11s                    kubelet     Node functional-20210216235525-2779755 status is now: NodeHasSufficientMemory
2021-02-17T00:01:25.7494691Z         	*   Normal  NodeHasNoDiskPressure    11s                    kubelet     Node functional-20210216235525-2779755 status is now: NodeHasNoDiskPressure
2021-02-17T00:01:25.7497382Z         	*   Normal  NodeHasSufficientPID     11s                    kubelet     Node functional-20210216235525-2779755 status is now: NodeHasSufficientPID
2021-02-17T00:01:25.7499463Z         	*   Normal  NodeNotReady             11s                    kubelet     Node functional-20210216235525-2779755 status is now: NodeNotReady
2021-02-17T00:01:25.7501329Z         	*   Normal  NodeAllocatableEnforced  11s                    kubelet     Updated Node Allocatable limit across pods
2021-02-17T00:01:25.7503230Z         	*   Normal  NodeReady                10s                    kubelet     Node functional-20210216235525-2779755 status is now: NodeReady
2021-02-17T00:01:25.7504206Z         	* 
2021-02-17T00:01:25.7504664Z         	* ==> dmesg <==
2021-02-17T00:01:25.7505447Z         	* [  +0.000862] FS-Cache: O-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7506428Z         	* [  +0.000668] FS-Cache: N-cookie c=000000002b1f8ab3 [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7507440Z         	* [  +0.001050] FS-Cache: N-cookie d=00000000866407ee n=000000005e953fae
2021-02-17T00:01:25.7508334Z         	* [  +0.000918] FS-Cache: N-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7509257Z         	* [  +0.013502] FS-Cache: Duplicate cookie detected
2021-02-17T00:01:25.7510297Z         	* [  +0.000689] FS-Cache: O-cookie c=00000000b1a9545c [p=000000008bc3ac66 fl=226 nc=0 na=1]
2021-02-17T00:01:25.7511311Z         	* [  +0.001078] FS-Cache: O-cookie d=00000000866407ee n=00000000cc8b7d72
2021-02-17T00:01:25.7512210Z         	* [  +0.000937] FS-Cache: O-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7513190Z         	* [  +0.000674] FS-Cache: N-cookie c=000000002b1f8ab3 [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7514304Z         	* [  +0.001054] FS-Cache: N-cookie d=00000000866407ee n=000000001a6a5283
2021-02-17T00:01:25.7515254Z         	* [  +0.000854] FS-Cache: N-key=[8] 'd51c040000000000'
2021-02-17T00:01:25.7516179Z         	* [  +1.733025] FS-Cache: Duplicate cookie detected
2021-02-17T00:01:25.7517215Z         	* [  +0.000664] FS-Cache: O-cookie c=00000000524c02db [p=000000008bc3ac66 fl=226 nc=0 na=1]
2021-02-17T00:01:25.7518215Z         	* [  +0.001123] FS-Cache: O-cookie d=00000000866407ee n=000000000f2cbff9
2021-02-17T00:01:25.7519104Z         	* [  +0.000853] FS-Cache: O-key=[8] 'd41c040000000000'
2021-02-17T00:01:25.7520198Z         	* [  +0.000669] FS-Cache: N-cookie c=00000000dc53534f [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7521252Z         	* [  +0.001112] FS-Cache: N-cookie d=00000000866407ee n=0000000012ba97ce
2021-02-17T00:01:25.7522153Z         	* [  +0.000856] FS-Cache: N-key=[8] 'd41c040000000000'
2021-02-17T00:01:25.7523086Z         	* [  +0.346794] FS-Cache: Duplicate cookie detected
2021-02-17T00:01:25.7525413Z         	* [  +0.000654] FS-Cache: O-cookie c=000000002f236a72 [p=000000008bc3ac66 fl=226 nc=0 na=1]
2021-02-17T00:01:25.7526628Z         	* [  +0.001105] FS-Cache: O-cookie d=00000000866407ee n=000000005ebbc510
2021-02-17T00:01:25.7527582Z         	* [  +0.000843] FS-Cache: O-key=[8] 'd71c040000000000'
2021-02-17T00:01:25.7528541Z         	* [  +0.000636] FS-Cache: N-cookie c=00000000d47b852c [p=000000008bc3ac66 fl=2 nc=0 na=1]
2021-02-17T00:01:25.7529542Z         	* [  +0.001088] FS-Cache: N-cookie d=00000000866407ee n=00000000a03ebc34
2021-02-17T00:01:25.7530447Z         	* [  +0.000888] FS-Cache: N-key=[8] 'd71c040000000000'
2021-02-17T00:01:25.7531006Z         	* 
2021-02-17T00:01:25.7531497Z         	* ==> etcd [8f607bf42a9f] <==
2021-02-17T00:01:25.7532319Z         	* 2021-02-17 00:00:59.424921 I | embed: initial cluster = 
2021-02-17T00:01:25.7533605Z         	* 2021-02-17 00:00:59.463680 I | etcdserver: restarting member 8bf199ee24c8c3e2 in cluster f398ff6fd447e89b at commit index 641
2021-02-17T00:01:25.7534821Z         	* raft2021/02/17 00:00:59 INFO: 8bf199ee24c8c3e2 switched to configuration voters=()
2021-02-17T00:01:25.7535811Z         	* raft2021/02/17 00:00:59 INFO: 8bf199ee24c8c3e2 became follower at term 2
2021-02-17T00:01:25.7537171Z         	* raft2021/02/17 00:00:59 INFO: newRaft 8bf199ee24c8c3e2 [peers: [], term: 2, commit: 641, applied: 0, lastindex: 641, lastterm: 2]
2021-02-17T00:01:25.7538580Z         	* 2021-02-17 00:00:59.502906 W | auth: simple token is not cryptographically signed
2021-02-17T00:01:25.7539912Z         	* 2021-02-17 00:00:59.527298 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
2021-02-17T00:01:25.7541254Z         	* raft2021/02/17 00:00:59 INFO: 8bf199ee24c8c3e2 switched to configuration voters=(10084010288757654498)
2021-02-17T00:01:25.7543183Z         	* 2021-02-17 00:00:59.532654 I | etcdserver/membership: added member 8bf199ee24c8c3e2 [https://192.168.82.108:2380] to cluster f398ff6fd447e89b
2021-02-17T00:01:25.7544745Z         	* 2021-02-17 00:00:59.532875 N | etcdserver/membership: set the initial cluster version to 3.4
2021-02-17T00:01:25.7545990Z         	* 2021-02-17 00:00:59.533626 I | etcdserver/api: enabled capabilities for version 3.4
2021-02-17T00:01:25.7547957Z         	* 2021-02-17 00:00:59.551573 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
2021-02-17T00:01:25.7549818Z         	* 2021-02-17 00:00:59.555394 I | embed: listening for metrics on http://127.0.0.1:2381
2021-02-17T00:01:25.7550924Z         	* 2021-02-17 00:00:59.555771 I | embed: listening for peers on 192.168.82.108:2380
2021-02-17T00:01:25.7551837Z         	* raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 is starting a new election at term 2
2021-02-17T00:01:25.7552803Z         	* raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 became candidate at term 3
2021-02-17T00:01:25.7554198Z         	* raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 received MsgVoteResp from 8bf199ee24c8c3e2 at term 3
2021-02-17T00:01:25.7555376Z         	* raft2021/02/17 00:01:00 INFO: 8bf199ee24c8c3e2 became leader at term 3
2021-02-17T00:01:25.7556461Z         	* raft2021/02/17 00:01:00 INFO: raft.node: 8bf199ee24c8c3e2 elected leader 8bf199ee24c8c3e2 at term 3
2021-02-17T00:01:25.7579690Z         	* 2021-02-17 00:01:00.898434 I | etcdserver: published {Name:functional-20210216235525-2779755 ClientURLs:[https://192.168.82.108:2379]} to cluster f398ff6fd447e89b
2021-02-17T00:01:25.7581550Z         	* 2021-02-17 00:01:00.898584 I | embed: ready to serve client requests
2021-02-17T00:01:25.7582599Z         	* 2021-02-17 00:01:00.901801 I | embed: serving client requests on 192.168.82.108:2379
2021-02-17T00:01:25.7583620Z         	* 2021-02-17 00:01:00.902770 I | embed: ready to serve client requests
2021-02-17T00:01:25.7584648Z         	* 2021-02-17 00:01:00.909680 I | embed: serving client requests on 127.0.0.1:2379
2021-02-17T00:01:25.7585744Z         	* 2021-02-17 00:01:22.638430 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7586427Z         	* 
2021-02-17T00:01:25.7586932Z         	* ==> etcd [8fd1325a18ee] <==
2021-02-17T00:01:25.7587815Z         	* 2021-02-16 23:57:52.464452 I | embed: ready to serve client requests
2021-02-17T00:01:25.7588828Z         	* 2021-02-16 23:57:52.465545 I | embed: ready to serve client requests
2021-02-17T00:01:25.7589843Z         	* 2021-02-16 23:57:52.466747 I | embed: serving client requests on 127.0.0.1:2379
2021-02-17T00:01:25.7590885Z         	* 2021-02-16 23:57:52.472829 I | embed: serving client requests on 192.168.82.108:2379
2021-02-17T00:01:25.7591972Z         	* 2021-02-16 23:58:16.034982 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7593110Z         	* 2021-02-16 23:58:19.927093 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7594266Z         	* 2021-02-16 23:58:29.925620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7595407Z         	* 2021-02-16 23:58:39.925441 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7596558Z         	* 2021-02-16 23:58:49.925551 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7597699Z         	* 2021-02-16 23:58:59.925554 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7598850Z         	* 2021-02-16 23:59:09.925573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7600433Z         	* 2021-02-16 23:59:19.925451 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7601706Z         	* 2021-02-16 23:59:29.925560 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7602914Z         	* 2021-02-16 23:59:39.925644 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7604104Z         	* 2021-02-16 23:59:49.925434 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7605590Z         	* 2021-02-16 23:59:59.925511 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7607200Z         	* 2021-02-17 00:00:09.925431 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7608417Z         	* 2021-02-17 00:00:19.925514 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7609579Z         	* 2021-02-17 00:00:29.928682 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7610726Z         	* 2021-02-17 00:00:39.925484 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7612079Z         	* 2021-02-17 00:00:49.925819 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-17T00:01:25.7613336Z         	* 2021-02-17 00:00:57.845440 N | pkg/osutil: received terminated signal, shutting down...
2021-02-17T00:01:25.7615432Z         	* WARNING: 2021/02/17 00:00:57 grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
2021-02-17T00:01:25.7617547Z         	* 2021-02-17 00:00:57.854805 I | etcdserver: skipped leadership transfer for single voting member cluster
2021-02-17T00:01:25.7619372Z         	* WARNING: 2021/02/17 00:00:57 grpc: addrConn.createTransport failed to connect to {192.168.82.108:2379  <nil> 0 <nil>}. Err :connection error: desc = "transport: Error while dialing dial tcp 192.168.82.108:2379: connect: connection refused". Reconnecting...
2021-02-17T00:01:25.7620784Z         	* 
2021-02-17T00:01:25.7621255Z         	* ==> kernel <==
2021-02-17T00:01:25.7621870Z         	*  00:01:24 up 27 days, 21:57,  0 users,  load average: 4.69, 3.22, 1.88
2021-02-17T00:01:25.7623252Z         	* Linux functional-20210216235525-2779755 5.4.0-1029-aws #30-Ubuntu SMP Tue Oct 20 10:08:09 UTC 2020 aarch64 aarch64 aarch64 GNU/Linux
2021-02-17T00:01:25.7624316Z         	* PRETTY_NAME="Ubuntu 20.04.1 LTS"
2021-02-17T00:01:25.7624843Z         	* 
2021-02-17T00:01:25.7625598Z         	* ==> kube-apiserver [a07ef8bb4a5d] <==
2021-02-17T00:01:25.7626685Z         	* I0217 00:01:22.429463       1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
2021-02-17T00:01:25.7628103Z         	* I0217 00:01:22.429636       1 available_controller.go:475] Starting AvailableConditionController
2021-02-17T00:01:25.7629643Z         	* I0217 00:01:22.429648       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
2021-02-17T00:01:25.7630913Z         	* I0217 00:01:22.481887       1 controller.go:86] Starting OpenAPI controller
2021-02-17T00:01:25.7632060Z         	* I0217 00:01:22.482086       1 naming_controller.go:291] Starting NamingConditionController
2021-02-17T00:01:25.7633385Z         	* I0217 00:01:22.482141       1 establishing_controller.go:76] Starting EstablishingController
2021-02-17T00:01:25.7635145Z         	* I0217 00:01:22.482342       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
2021-02-17T00:01:25.7638216Z         	* I0217 00:01:22.482710       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
2021-02-17T00:01:25.7640369Z         	* I0217 00:01:22.482746       1 crd_finalizer.go:266] Starting CRDFinalizer
2021-02-17T00:01:25.7641773Z         	* I0217 00:01:22.691475       1 crdregistration_controller.go:111] Starting crd-autoregister controller
2021-02-17T00:01:25.7643443Z         	* I0217 00:01:22.691631       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
2021-02-17T00:01:25.7645426Z         	* I0217 00:01:22.691734       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
2021-02-17T00:01:25.7647394Z         	* I0217 00:01:22.692223       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
2021-02-17T00:01:25.7648930Z         	* I0217 00:01:22.891716       1 shared_informer.go:247] Caches are synced for crd-autoregister 
2021-02-17T00:01:25.7650524Z         	* E0217 00:01:22.920693       1 controller.go:152] Unable to remove old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
2021-02-17T00:01:25.7652389Z         	* I0217 00:01:22.933643       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2021-02-17T00:01:25.7653992Z         	* I0217 00:01:22.942843       1 cache.go:39] Caches are synced for AvailableConditionController controller
2021-02-17T00:01:25.7655259Z         	* I0217 00:01:22.949532       1 cache.go:39] Caches are synced for autoregister controller
2021-02-17T00:01:25.7656435Z         	* I0217 00:01:22.950206       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
2021-02-17T00:01:25.7657765Z         	* I0217 00:01:22.950929       1 apf_controller.go:266] Running API Priority and Fairness config worker
2021-02-17T00:01:25.7659097Z         	* I0217 00:01:22.997605       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
2021-02-17T00:01:25.7660689Z         	* I0217 00:01:23.000657       1 shared_informer.go:247] Caches are synced for node_authorizer 
2021-02-17T00:01:25.7661983Z         	* I0217 00:01:23.421440       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2021-02-17T00:01:25.7663706Z         	* I0217 00:01:23.421480       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2021-02-17T00:01:25.7665360Z         	* I0217 00:01:23.452034       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
2021-02-17T00:01:25.7666275Z         	* 
2021-02-17T00:01:25.7667112Z         	* ==> kube-apiserver [f61b22da999c] <==
2021-02-17T00:01:25.7668152Z         	* I0217 00:01:08.748768       1 establishing_controller.go:76] Starting EstablishingController
2021-02-17T00:01:25.7669877Z         	* I0217 00:01:08.748780       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
2021-02-17T00:01:25.7672503Z         	* I0217 00:01:08.748796       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
2021-02-17T00:01:25.7674800Z         	* I0217 00:01:08.748815       1 crd_finalizer.go:266] Starting CRDFinalizer
2021-02-17T00:01:25.7676562Z         	* I0217 00:01:08.748843       1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
2021-02-17T00:01:25.7780570Z         	* I0217 00:01:08.748899       1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
2021-02-17T00:01:25.7782716Z         	* I0217 00:01:08.786444       1 crdregistration_controller.go:111] Starting crd-autoregister controller
2021-02-17T00:01:25.7784338Z         	* I0217 00:01:08.786463       1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
2021-02-17T00:01:25.7785589Z         	* I0217 00:01:08.909921       1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
2021-02-17T00:01:25.7787186Z         	* I0217 00:01:08.909953       1 shared_informer.go:247] Caches are synced for crd-autoregister 
2021-02-17T00:01:25.7788609Z         	* I0217 00:01:08.909971       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
2021-02-17T00:01:25.7790663Z         	* I0217 00:01:08.922944       1 cache.go:39] Caches are synced for AvailableConditionController controller
2021-02-17T00:01:25.7792008Z         	* I0217 00:01:08.923459       1 apf_controller.go:266] Running API Priority and Fairness config worker
2021-02-17T00:01:25.7793095Z         	* I0217 00:01:08.923869       1 cache.go:39] Caches are synced for autoregister controller
2021-02-17T00:01:25.7794133Z         	* I0217 00:01:08.981381       1 shared_informer.go:247] Caches are synced for node_authorizer 
2021-02-17T00:01:25.7796762Z         	* I0217 00:01:09.570927       1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
2021-02-17T00:01:25.7798660Z         	* I0217 00:01:09.570951       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
2021-02-17T00:01:25.7800407Z         	* I0217 00:01:09.604291       1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
2021-02-17T00:01:25.7801842Z         	* I0217 00:01:10.573333       1 controller.go:609] quota admission added evaluator for: serviceaccounts
2021-02-17T00:01:25.7803075Z         	* I0217 00:01:10.590931       1 controller.go:609] quota admission added evaluator for: deployments.apps
2021-02-17T00:01:25.7804464Z         	* I0217 00:01:10.637711       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
2021-02-17T00:01:25.7805940Z         	* I0217 00:01:10.651065       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
2021-02-17T00:01:25.7807770Z         	* I0217 00:01:10.656298       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
2021-02-17T00:01:25.7809965Z         	* I0217 00:01:12.307049       1 controller.go:609] quota admission added evaluator for: events.events.k8s.io
2021-02-17T00:01:25.7811581Z         	* I0217 00:01:12.980312       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
2021-02-17T00:01:25.7812543Z         	* 
2021-02-17T00:01:25.7813566Z         	* ==> kube-controller-manager [4bf1331ef083] <==
2021-02-17T00:01:25.7815339Z         	* I0216 23:58:17.971424       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving 
2021-02-17T00:01:25.7817523Z         	* I0216 23:58:17.971431       1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client 
2021-02-17T00:01:25.7818952Z         	* I0216 23:58:17.978572       1 shared_informer.go:247] Caches are synced for daemon sets 
2021-02-17T00:01:25.7819890Z         	* I0216 23:58:17.980580       1 shared_informer.go:247] Caches are synced for job 
2021-02-17T00:01:25.7820824Z         	* I0216 23:58:17.996091       1 shared_informer.go:247] Caches are synced for endpoint 
2021-02-17T00:01:25.7821839Z         	* I0216 23:58:18.001644       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
2021-02-17T00:01:25.7823371Z         	* I0216 23:58:18.044493       1 range_allocator.go:373] Set node functional-20210216235525-2779755 PodCIDR to [10.244.0.0/24]
2021-02-17T00:01:25.7824571Z         	* I0216 23:58:18.070873       1 shared_informer.go:247] Caches are synced for attach detach 
2021-02-17T00:01:25.7825574Z         	* I0216 23:58:18.072738       1 shared_informer.go:247] Caches are synced for deployment 
2021-02-17T00:01:25.7826572Z         	* I0216 23:58:18.081999       1 shared_informer.go:247] Caches are synced for disruption 
2021-02-17T00:01:25.7827544Z         	* I0216 23:58:18.082020       1 disruption.go:339] Sending events to api server.
2021-02-17T00:01:25.7829562Z         	* I0216 23:58:18.126753       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 2"
2021-02-17T00:01:25.7832456Z         	* I0216 23:58:18.126787       1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lvfk2"
2021-02-17T00:01:25.7834109Z         	* I0216 23:58:18.128131       1 shared_informer.go:247] Caches are synced for ReplicaSet 
2021-02-17T00:01:25.7835159Z         	* I0216 23:58:18.128311       1 shared_informer.go:247] Caches are synced for persistent volume 
2021-02-17T00:01:25.7836208Z         	* I0216 23:58:18.171707       1 shared_informer.go:247] Caches are synced for resource quota 
2021-02-17T00:01:25.7837214Z         	* I0216 23:58:18.197220       1 shared_informer.go:247] Caches are synced for resource quota 
2021-02-17T00:01:25.7839315Z         	* I0216 23:58:18.217117       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-9jwcl"
2021-02-17T00:01:25.7842651Z         	* I0216 23:58:18.226678       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-5gqfl"
2021-02-17T00:01:25.7844831Z         	* I0216 23:58:18.349094       1 shared_informer.go:240] Waiting for caches to sync for garbage collector
2021-02-17T00:01:25.7846126Z         	* I0216 23:58:18.620820       1 shared_informer.go:247] Caches are synced for garbage collector 
2021-02-17T00:01:25.7847521Z         	* I0216 23:58:18.620850       1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
2021-02-17T00:01:25.7848864Z         	* I0216 23:58:18.649273       1 shared_informer.go:247] Caches are synced for garbage collector 
2021-02-17T00:01:25.7851179Z         	* I0216 23:58:18.882898       1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-74ff55c5b to 1"
2021-02-17T00:01:25.7854323Z         	* I0216 23:58:18.895151       1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-74ff55c5b-5gqfl"
2021-02-17T00:01:25.7855824Z         	* 
2021-02-17T00:01:25.7856719Z         	* ==> kube-controller-manager [71db52d9a3e8] <==
2021-02-17T00:01:25.7857721Z         	* I0217 00:01:14.738870       1 shared_informer.go:247] Caches are synced for token_cleaner 
2021-02-17T00:01:25.7859363Z         	* I0217 00:01:14.891621       1 node_ipam_controller.go:91] Sending events to api server.
2021-02-17T00:01:25.7861902Z         	* W0217 00:01:15.156379       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Secret ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7865221Z         	* W0217 00:01:15.156463       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ServiceAccount ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7869025Z         	* E0217 00:01:22.784748       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ServiceAccount: failed to list *v1.ServiceAccount: serviceaccounts is forbidden: User "system:kube-controller-manager" cannot list resource "serviceaccounts" in API group "" at the cluster scope
2021-02-17T00:01:25.7872509Z         	* E0217 00:01:22.790669       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Secret: failed to list *v1.Secret: secrets is forbidden: User "system:kube-controller-manager" cannot list resource "secrets" in API group "" at the cluster scope
2021-02-17T00:01:25.7874776Z         	* I0217 00:01:24.895161       1 range_allocator.go:82] Sending events to api server.
2021-02-17T00:01:25.7876058Z         	* I0217 00:01:24.895278       1 range_allocator.go:116] No Secondary Service CIDR provided. Skipping filtering out secondary service addresses.
2021-02-17T00:01:25.7877370Z         	* I0217 00:01:24.895312       1 controllermanager.go:554] Started "nodeipam"
2021-02-17T00:01:25.7878397Z         	* I0217 00:01:24.895870       1 node_ipam_controller.go:159] Starting ipam controller
2021-02-17T00:01:25.7879379Z         	* I0217 00:01:24.895886       1 shared_informer.go:240] Waiting for caches to sync for node
2021-02-17T00:01:25.7880778Z         	* I0217 00:01:24.896247       1 shared_informer.go:240] Waiting for caches to sync for resource quota
2021-02-17T00:01:25.7884134Z         	* W0217 00:01:24.924651       1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="functional-20210216235525-2779755" does not exist
2021-02-17T00:01:25.7886039Z         	* I0217 00:01:24.939044       1 shared_informer.go:247] Caches are synced for service account 
2021-02-17T00:01:25.7887049Z         	* I0217 00:01:24.948379       1 shared_informer.go:247] Caches are synced for crt configmap 
2021-02-17T00:01:25.7888466Z         	* I0217 00:01:24.959433       1 shared_informer.go:247] Caches are synced for namespace 
2021-02-17T00:01:25.7889640Z         	* I0217 00:01:24.988829       1 shared_informer.go:247] Caches are synced for expand 
2021-02-17T00:01:25.7891265Z         	* I0217 00:01:24.989039       1 shared_informer.go:247] Caches are synced for certificate-csrapproving 
2021-02-17T00:01:25.7892490Z         	* I0217 00:01:24.989085       1 shared_informer.go:247] Caches are synced for bootstrap_signer 
2021-02-17T00:01:25.7893652Z         	* I0217 00:01:24.991191       1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator 
2021-02-17T00:01:25.7894759Z         	* I0217 00:01:24.996098       1 shared_informer.go:247] Caches are synced for node 
2021-02-17T00:01:25.7895708Z         	* I0217 00:01:24.996226       1 range_allocator.go:172] Starting range CIDR allocator
2021-02-17T00:01:25.7896727Z         	* I0217 00:01:24.996250       1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
2021-02-17T00:01:25.7897796Z         	* I0217 00:01:24.996281       1 shared_informer.go:247] Caches are synced for cidrallocator 
2021-02-17T00:01:25.7898762Z         	* I0217 00:01:25.010171       1 shared_informer.go:247] Caches are synced for TTL 
2021-02-17T00:01:25.7899424Z         	* 
2021-02-17T00:01:25.7900114Z         	* ==> kube-proxy [5125090049bc] <==
2021-02-17T00:01:25.7900900Z         	* I0216 23:58:20.295618       1 node.go:172] Successfully retrieved node IP: 192.168.82.108
2021-02-17T00:01:25.7902345Z         	* I0216 23:58:20.295698       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.82.108), assume IPv4 operation
2021-02-17T00:01:25.7903508Z         	* W0216 23:58:20.375690       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
2021-02-17T00:01:25.7904476Z         	* I0216 23:58:20.375784       1 server_others.go:185] Using iptables Proxier.
2021-02-17T00:01:25.7905287Z         	* I0216 23:58:20.380839       1 server.go:650] Version: v1.20.2
2021-02-17T00:01:25.7906097Z         	* I0216 23:58:20.381254       1 conntrack.go:52] Setting nf_conntrack_max to 131072
2021-02-17T00:01:25.7908418Z         	* I0216 23:58:20.381323       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
2021-02-17T00:01:25.7910303Z         	* I0216 23:58:20.381353       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
2021-02-17T00:01:25.7911576Z         	* I0216 23:58:20.387720       1 config.go:315] Starting service config controller
2021-02-17T00:01:25.7912573Z         	* I0216 23:58:20.387738       1 shared_informer.go:240] Waiting for caches to sync for service config
2021-02-17T00:01:25.7913832Z         	* I0216 23:58:20.396779       1 config.go:224] Starting endpoint slice config controller
2021-02-17T00:01:25.7914894Z         	* I0216 23:58:20.398104       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
2021-02-17T00:01:25.7915977Z         	* I0216 23:58:20.487872       1 shared_informer.go:247] Caches are synced for service config 
2021-02-17T00:01:25.7917021Z         	* I0216 23:58:20.498664       1 shared_informer.go:247] Caches are synced for endpoint slice config 
2021-02-17T00:01:25.7918167Z         	* 
2021-02-17T00:01:25.7919260Z         	* ==> kube-proxy [60cc59b48112] <==
2021-02-17T00:01:25.7920192Z         	* I0217 00:01:08.977998       1 node.go:172] Successfully retrieved node IP: 192.168.82.108
2021-02-17T00:01:25.7921737Z         	* I0217 00:01:08.978264       1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.82.108), assume IPv4 operation
2021-02-17T00:01:25.7922891Z         	* W0217 00:01:09.011677       1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
2021-02-17T00:01:25.7923879Z         	* I0217 00:01:09.015080       1 server_others.go:185] Using iptables Proxier.
2021-02-17T00:01:25.7924683Z         	* I0217 00:01:09.015281       1 server.go:650] Version: v1.20.2
2021-02-17T00:01:25.7925641Z         	* I0217 00:01:09.015739       1 conntrack.go:52] Setting nf_conntrack_max to 131072
2021-02-17T00:01:25.7926575Z         	* I0217 00:01:09.016506       1 config.go:315] Starting service config controller
2021-02-17T00:01:25.7927566Z         	* I0217 00:01:09.016523       1 shared_informer.go:240] Waiting for caches to sync for service config
2021-02-17T00:01:25.7928582Z         	* I0217 00:01:09.018671       1 config.go:224] Starting endpoint slice config controller
2021-02-17T00:01:25.7929646Z         	* I0217 00:01:09.018683       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
2021-02-17T00:01:25.7930704Z         	* I0217 00:01:09.116645       1 shared_informer.go:247] Caches are synced for service config 
2021-02-17T00:01:25.7931759Z         	* I0217 00:01:09.118788       1 shared_informer.go:247] Caches are synced for endpoint slice config 
2021-02-17T00:01:25.7934073Z         	* W0217 00:01:15.157304       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.EndpointSlice ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7937031Z         	* W0217 00:01:15.157404       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7941666Z         	* E0217 00:01:16.183512       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8441/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=592": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7944391Z         	* 
2021-02-17T00:01:25.7945234Z         	* ==> kube-scheduler [9f35eeb44c8f] <==
2021-02-17T00:01:25.7947201Z         	* W0217 00:01:15.156806       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.CSINode ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7949992Z         	* W0217 00:01:15.156846       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Pod ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7953592Z         	* W0217 00:01:15.156889       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolumeClaim ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7957052Z         	* W0217 00:01:15.156928       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1beta1.PodDisruptionBudget ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7960809Z         	* W0217 00:01:15.156963       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.ReplicationController ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7963961Z         	* W0217 00:01:15.157001       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.PersistentVolume ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7967061Z         	* W0217 00:01:15.157040       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.StorageClass ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7969980Z         	* W0217 00:01:15.157080       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7972995Z         	* W0217 00:01:15.159086       1 reflector.go:436] k8s.io/client-go/informers/factory.go:134: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:134: Unexpected watch close - watch lasted less than a second and no items received
2021-02-17T00:01:25.7977058Z         	* E0217 00:01:15.974491       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.82.108:8441/api/v1/replicationcontrollers?resourceVersion=582": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7980931Z         	* E0217 00:01:16.044069       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.82.108:8441/apis/storage.k8s.io/v1/storageclasses?resourceVersion=582": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7985612Z         	* E0217 00:01:16.108933       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.82.108:8441/apis/apps/v1/statefulsets?resourceVersion=582": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7989175Z         	* E0217 00:01:16.189947       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.82.108:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&resourceVersion=603": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.7992944Z         	* E0217 00:01:22.803828       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.7997070Z         	* E0217 00:01:22.804049       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2021-02-17T00:01:25.8002135Z         	* E0217 00:01:22.804260       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2021-02-17T00:01:25.8007098Z         	* E0217 00:01:22.804335       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2021-02-17T00:01:25.8011639Z         	* E0217 00:01:22.808221       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-02-17T00:01:25.8015822Z         	* E0217 00:01:22.808327       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8019266Z         	* E0217 00:01:22.808379       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8023285Z         	* E0217 00:01:22.808528       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
2021-02-17T00:01:25.8026913Z         	* E0217 00:01:22.808701       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2021-02-17T00:01:25.8029729Z         	* E0217 00:01:22.808829       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2021-02-17T00:01:25.8032548Z         	* E0217 00:01:22.808930       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2021-02-17T00:01:25.8035850Z         	* E0217 00:01:22.809065       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.8038112Z         	* 
2021-02-17T00:01:25.8039062Z         	* ==> kube-scheduler [f299474e9f3c] <==
2021-02-17T00:01:25.8040797Z         	* I0216 23:57:55.081863       1 serving.go:331] Generated self-signed cert in-memory
2021-02-17T00:01:25.8043980Z         	* W0216 23:57:59.499312       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
2021-02-17T00:01:25.8048744Z         	* W0216 23:57:59.499512       1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
2021-02-17T00:01:25.8051373Z         	* W0216 23:57:59.499600       1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
2021-02-17T00:01:25.8053808Z         	* W0216 23:57:59.499678       1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
2021-02-17T00:01:25.8055545Z         	* I0216 23:57:59.540243       1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
2021-02-17T00:01:25.8057889Z         	* I0216 23:57:59.542337       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-02-17T00:01:25.8060665Z         	* I0216 23:57:59.542978       1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
2021-02-17T00:01:25.8062627Z         	* I0216 23:57:59.543078       1 tlsconfig.go:240] Starting DynamicServingCertificateController
2021-02-17T00:01:25.8065478Z         	* E0216 23:57:59.543966       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8069178Z         	* E0216 23:57:59.545114       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
2021-02-17T00:01:25.8073152Z         	* E0216 23:57:59.545362       1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
2021-02-17T00:01:25.8080136Z         	* E0216 23:57:59.546148       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
2021-02-17T00:01:25.8087699Z         	* E0216 23:57:59.548607       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
2021-02-17T00:01:25.8092265Z         	* E0216 23:57:59.549854       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-02-17T00:01:25.8095729Z         	* E0216 23:57:59.550907       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
2021-02-17T00:01:25.8099195Z         	* E0216 23:57:59.562699       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
2021-02-17T00:01:25.8103457Z         	* E0216 23:57:59.563182       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.8107607Z         	* E0216 23:57:59.563416       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
2021-02-17T00:01:25.8111158Z         	* E0216 23:57:59.563845       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
2021-02-17T00:01:25.8114406Z         	* E0216 23:57:59.564278       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
2021-02-17T00:01:25.8118791Z         	* E0216 23:58:00.417302       1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
2021-02-17T00:01:25.8122746Z         	* I0216 23:58:01.043172       1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
2021-02-17T00:01:25.8124072Z         	* 
2021-02-17T00:01:25.8124550Z         	* ==> kubelet <==
2021-02-17T00:01:25.8125445Z         	* -- Logs begin at Tue 2021-02-16 23:57:11 UTC, end at Wed 2021-02-17 00:01:25 UTC. --
2021-02-17T00:01:25.8130830Z         	* Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.780173   15475 status_manager.go:550] Failed to get status for pod "kube-controller-manager-functional-20210216235525-2779755_kube-system(57b8c22dbe6410e4bd36cf14b0f8bdc7)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-20210216235525-2779755": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.8136807Z         	* Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.844570   15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-9jwcl through plugin: invalid network status for
2021-02-17T00:01:25.8140037Z         	* Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.849077   15475 pod_container_deletor.go:79] Container "8416187a50e920331a49c3bbf146074f2c32bb228f3808050534a82ffd8dbef7" not found in pod's containers
2021-02-17T00:01:25.8144851Z         	* Feb 17 00:01:15 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:15.979475   15475 status_manager.go:550] Failed to get status for pod "nginx-svc_default(e262f289-58b0-4c41-aad0-b1f27b215a87)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/default/pods/nginx-svc": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.8148781Z         	* Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.026809   15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8151970Z         	* Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.074189   15475 pod_container_deletor.go:79] Container "b501d6f1e41731aba59158fda9d32800d305eba9db75cacf081ac9ef75c2233b" not found in pod's containers
2021-02-17T00:01:25.8155789Z         	* Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.081372   15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8158881Z         	* Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.082831   15475 pod_container_deletor.go:79] Container "8472496ad852e71254ec44abcc9960f802b1dd67d9bdf2ffb853cc3c07c4cb42" not found in pod's containers
2021-02-17T00:01:25.8162532Z         	* Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.134223   15475 pod_container_deletor.go:79] Container "c48c263e44c9c2f75bc3f7c5a42c1ff3b9db3bbe83f3a81c18c1553d091d6d80" not found in pod's containers
2021-02-17T00:01:25.8166210Z         	* Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:16.137587   15475 scope.go:95] [topologymanager] RemoveContainer - Container ID: f2e3cd415a888cf60100fbf5fb58a54a47731dddeb32463a3d8e1aa8ac3a8d09
2021-02-17T00:01:25.8170743Z         	* Feb 17 00:01:16 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:16.241862   15475 status_manager.go:550] Failed to get status for pod "kube-proxy-lvfk2_kube-system(22e1a123-9634-4fec-8a72-1034b1968f87)": Get "https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/pods/kube-proxy-lvfk2": dial tcp 192.168.82.108:8441: connect: connection refused
2021-02-17T00:01:25.8175424Z         	* Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:17.140694   15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-9jwcl through plugin: invalid network status for
2021-02-17T00:01:25.8178903Z         	* Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:17.157909   15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8182170Z         	* Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:17.163462   15475 kubelet.go:1621] Trying to delete pod kube-apiserver-functional-20210216235525-2779755_kube-system bba41377-5d01-4c3c-984c-eb882846f88c
2021-02-17T00:01:25.8186597Z         	* Feb 17 00:01:17 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:17.579165   15475 request.go:655] Throttling request took 1.064157484s, request: GET:https://control-plane.minikube.internal:8441/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%3Dkube-proxy-token-b877h&resourceVersion=582
2021-02-17T00:01:25.8191001Z         	* Feb 17 00:01:19 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:19.227718   15475 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for default/nginx-svc through plugin: invalid network status for
2021-02-17T00:01:25.8195727Z         	* Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.791224   15475 reflector.go:138] object-"kube-system"/"coredns-token-z5pj2": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "coredns-token-z5pj2" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8201706Z         	* Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.796483   15475 reflector.go:138] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8208750Z         	* Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.797132   15475 reflector.go:138] object-"kube-system"/"storage-provisioner-token-jhjgp": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "storage-provisioner-token-jhjgp" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8214870Z         	* Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.798204   15475 reflector.go:138] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8220588Z         	* Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.798735   15475 reflector.go:138] object-"kube-system"/"kube-proxy-token-b877h": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "kube-proxy-token-b877h" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8226671Z         	* Feb 17 00:01:22 functional-20210216235525-2779755 kubelet[15475]: E0217 00:01:22.800718   15475 reflector.go:138] object-"default"/"default-token-8ljbt": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "default-token-8ljbt" is forbidden: User "system:node:functional-20210216235525-2779755" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node 'functional-20210216235525-2779755' and this object
2021-02-17T00:01:25.8231533Z         	* Feb 17 00:01:23 functional-20210216235525-2779755 kubelet[15475]: W0217 00:01:23.099246   15475 kubelet.go:1625] Deleted mirror pod "kube-apiserver-functional-20210216235525-2779755_kube-system(bba41377-5d01-4c3c-984c-eb882846f88c)" because it is outdated
2021-02-17T00:01:25.8236168Z         	* Feb 17 00:01:23 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:23.262057   15475 kubelet.go:1621] Trying to delete pod kube-apiserver-functional-20210216235525-2779755_kube-system bba41377-5d01-4c3c-984c-eb882846f88c
2021-02-17T00:01:25.8239838Z         	* Feb 17 00:01:24 functional-20210216235525-2779755 kubelet[15475]: I0217 00:01:24.525885   15475 kubelet.go:1621] Trying to delete pod kube-apiserver-functional-20210216235525-2779755_kube-system bba41377-5d01-4c3c-984c-eb882846f88c
2021-02-17T00:01:25.8242367Z         	* 
2021-02-17T00:01:25.8243238Z         	* ==> storage-provisioner [65452e92862d] <==
2021-02-17T00:01:25.8244306Z         	* I0216 23:58:24.667869       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
2021-02-17T00:01:25.8245640Z         	* I0216 23:58:24.753470       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
2021-02-17T00:01:25.8248407Z         	* I0216 23:58:24.753506       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
2021-02-17T00:01:25.8250329Z         	* I0216 23:58:24.799196       1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
2021-02-17T00:01:25.8254181Z         	* I0216 23:58:24.809684       1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e3dffc90-2681-49a4-b60e-fc0704798284", APIVersion:"v1", ResourceVersion:"466", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-20210216235525-2779755_b9199e52-32a5-4177-a720-95d6ad979d84 became leader
2021-02-17T00:01:25.8257942Z         	* I0216 23:58:24.809802       1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_functional-20210216235525-2779755_b9199e52-32a5-4177-a720-95d6ad979d84!
2021-02-17T00:01:25.8260411Z         	* I0216 23:58:24.911352       1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_functional-20210216235525-2779755_b9199e52-32a5-4177-a720-95d6ad979d84!
2021-02-17T00:01:25.8264198Z         	* E0217 00:00:57.833611       1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.PersistentVolumeClaim: Get "https://10.96.0.1:443/api/v1/persistentvolumeclaims?allowWatchBookmarks=true&resourceVersion=1&timeout=5m19s&timeoutSeconds=319&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.8268438Z         	* E0217 00:00:57.833656       1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.StorageClass: Get "https://10.96.0.1:443/apis/storage.k8s.io/v1/storageclasses?allowWatchBookmarks=true&resourceVersion=457&timeout=8m26s&timeoutSeconds=506&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.8272578Z         	* E0217 00:00:57.833681       1 reflector.go:307] pkg/mod/k8s.io/client-go@v0.17.3/tools/cache/reflector.go:105: Failed to watch *v1.PersistentVolume: Get "https://10.96.0.1:443/api/v1/persistentvolumes?allowWatchBookmarks=true&resourceVersion=1&timeout=7m7s&timeoutSeconds=427&watch=true": dial tcp 10.96.0.1:443: connect: connection refused
2021-02-17T00:01:25.8274749Z         	* 
2021-02-17T00:01:25.8275633Z         	* ==> storage-provisioner [aa25c43bff27] <==
2021-02-17T00:01:25.8276737Z         	* I0217 00:01:05.578036       1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
2021-02-17T00:01:25.8278083Z         	* I0217 00:01:08.947015       1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
2021-02-17T00:01:25.8280179Z         	* I0217 00:01:08.982874       1 leaderelection.go:242] attempting to acquire leader lease  kube-system/k8s.io-minikube-hostpath...
2021-02-17T00:01:25.8281263Z         
2021-02-17T00:01:25.8281896Z         -- /stdout --
2021-02-17T00:01:25.8283569Z     helpers_test.go:250: (dbg) Run:  ./minikube-linux-arm64 status --format={{.APIServer}} -p functional-20210216235525-2779755 -n functional-20210216235525-2779755
2021-02-17T00:01:26.1719266Z     helpers_test.go:257: (dbg) Run:  kubectl --context functional-20210216235525-2779755 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
2021-02-17T00:01:26.2753885Z     helpers_test.go:263: non-running pods: kube-apiserver-functional-20210216235525-2779755
2021-02-17T00:01:26.2755978Z     helpers_test.go:265: ======> post-mortem[TestFunctional/parallel/DockerEnv]: describe non-running pods <======
2021-02-17T00:01:26.2758367Z     helpers_test.go:268: (dbg) Run:  kubectl --context functional-20210216235525-2779755 describe pod kube-apiserver-functional-20210216235525-2779755
2021-02-17T00:01:26.3845029Z     helpers_test.go:268: (dbg) Non-zero exit: kubectl --context functional-20210216235525-2779755 describe pod kube-apiserver-functional-20210216235525-2779755: exit status 1 (108.714473ms)
2021-02-17T00:01:26.3846817Z         
2021-02-17T00:01:26.3847290Z         ** stderr ** 
2021-02-17T00:01:26.3849016Z         	Error from server (NotFound): pods "kube-apiserver-functional-20210216235525-2779755" not found
2021-02-17T00:01:26.3850295Z         
2021-02-17T00:01:26.3850744Z         ** /stderr **
2021-02-17T00:01:26.3852677Z     helpers_test.go:270: kubectl --context functional-20210216235525-2779755 describe pod kube-apiserver-functional-20210216235525-2779755: exit status 1
@ilya-zuyev ilya-zuyev added kind/flake Categorizes issue or PR as related to a flaky test. kind/failing-test Categorizes issue or PR as related to a consistently or frequently failing test. labels Feb 17, 2021
@priyawadhwa priyawadhwa added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Feb 24, 2021
@medyagh
Member

medyagh commented Feb 26, 2021

@ilya-zuyev is this test still failing?

@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Apr 21, 2021
@medyagh
Member

medyagh commented May 3, 2021

Haven't seen it anymore.

@medyagh medyagh closed this as completed May 3, 2021