Log file created at: 2021/03/24 20:37:29
Running on machine: ubuntu
Binary: Built with gc go1.16 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0324 20:37:29.817709 32984 out.go:239] Setting OutFile to fd 1 ...
I0324 20:37:29.818086 32984 out.go:291] isatty.IsTerminal(1) = true
I0324 20:37:29.818091 32984 out.go:252] Setting ErrFile to fd 2...
I0324 20:37:29.818096 32984 out.go:291] isatty.IsTerminal(2) = true
I0324 20:37:29.818198 32984 root.go:308] Updating PATH: /root/.minikube/bin
I0324 20:37:29.818375 32984 mustload.go:66] Loading cluster: minikube
I0324 20:37:29.819028 32984 exec_runner.go:52] Run: systemctl --version
I0324 20:37:29.822489 32984 kubeconfig.go:93] found "minikube" server: "https://192.168.1.6:8443"
I0324 20:37:29.822504 32984 api_server.go:146] Checking apiserver status ...
I0324 20:37:29.822537 32984 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:29.977479 32984 exec_runner.go:52] Run: sudo egrep ^[0-9]+:freezer: /proc/31093/cgroup
I0324 20:37:29.987429 32984 api_server.go:162] apiserver freezer: "2:freezer:/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/ee9219711a8d35ecf8b37f3ed2557ff65a39f90086b2ab0eddd3c35b2a7dd205"
I0324 20:37:29.987483 32984 exec_runner.go:52] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/ee9219711a8d35ecf8b37f3ed2557ff65a39f90086b2ab0eddd3c35b2a7dd205/freezer.state
I0324 20:37:29.996292 32984 api_server.go:184] freezer state: "THAWED"
I0324 20:37:29.996316 32984 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ...
I0324 20:37:30.003976 32984 api_server.go:241] https://192.168.1.6:8443/healthz returned 200: ok
I0324 20:37:30.003989 32984 host.go:66] Checking if "minikube" exists ...
I0324 20:37:30.004248 32984 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0324 20:37:30.062325 32984 logs.go:255] 1 containers: [ee9219711a8d]
I0324 20:37:30.062418 32984 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0324 20:37:30.115035 32984 logs.go:255] 1 containers: [4c376b899994]
I0324 20:37:30.115089 32984 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0324 20:37:30.164641 32984 logs.go:255] 1 containers: [1cc69d6af318]
I0324 20:37:30.164694 32984 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0324 20:37:30.215139 32984 logs.go:255] 1 containers: [05b06e91f987]
I0324 20:37:30.215207 32984 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0324 20:37:30.266250 32984 logs.go:255] 1 containers: [954a17c3f5ef]
I0324 20:37:30.266309 32984 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0324 20:37:30.319048 32984 logs.go:255] 0 containers: []
W0324 20:37:30.319062 32984 logs.go:257] No container was found matching "kubernetes-dashboard"
I0324 20:37:30.319107 32984 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0324 20:37:30.369674 32984 logs.go:255] 1 containers: [696e909b7688]
I0324 20:37:30.369732 32984 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0324 20:37:30.418153 32984 logs.go:255] 1 containers: [69b34f133437]
I0324 20:37:30.421327 32984 out.go:129] ==> Docker <==
I0324 20:37:30.421393 32984 exec_runner.go:52] Run: /bin/bash -c "sudo journalctl -u docker -n 60"
I0324 20:37:30.438233 32984 out.go:129] -- Logs begin at Fri 2021-03-19 21:31:59 EET, end at Wed 2021-03-24 20:37:30 EET. --
I0324 20:37:30.440224 32984 out.go:129] Mar 24 20:33:58 ubuntu dockerd[8910]: time="2021-03-24T20:33:58.980767286+02:00" level=info msg="ignoring event" container=445f838d106fe1ba29e55f7aa430634219b2c424b4ed08c783b62ac5b13d2212 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.441271 32984 out.go:129] Mar 24 20:33:58 ubuntu dockerd[8910]: time="2021-03-24T20:33:58.980790664+02:00" level=info msg="ignoring event" container=a70c4fd15d719010ae7d408570ace11bbd1367b549880a1b64404168c2a72b6b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.442688 32984 out.go:129] Mar 24 20:33:58 ubuntu dockerd[8910]: time="2021-03-24T20:33:58.992578864+02:00" level=info msg="ignoring event" container=52e498c6aa77ebf179b0409bb8f4d9bc0bace350e639ac0f5d416dae0693e31c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.443742 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.037967179+02:00" level=info msg="ignoring event" container=1a01067939d19f891ea82d2538768fa316b9a088cd540bd76cf0b5aee73ff2d1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.445113 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.038032664+02:00" level=info msg="ignoring event" container=2e33ae6fff8db4c4a0855cb7e379f6a7414f7df1671b766d48c55dafd55716eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.446363 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.038075827+02:00" level=info msg="ignoring event" container=c8e9c5799dd0491b218f0bf5f42808c31621e5f9423ab220717ad31bc295f6f9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.447505 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.052238508+02:00" level=info msg="ignoring event" container=b5b3c839a6362bdaec65ac7770d16cd7971cc61154686759c0d49983e9dd1b3a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.448636 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.052331287+02:00" level=info msg="ignoring event" container=3ca8bed690b69c20a0011420f71a5a3fa3c820e1507d0ac21a85c2568b9b03e4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.449611 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.060792629+02:00" level=info msg="ignoring event" container=fa14c2f9cc9c25366cbf8d65bbebd904976095a438e69dc522df345465c0f7aa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.450835 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.083012209+02:00" level=info msg="ignoring event" container=51bd19ac8a586fb2f2ca77e09cf2ad507b6692bd4ab5cba28906d2b71f7ce7d0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.452097 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.116277919+02:00" level=info msg="ignoring event" container=73b1ba94e4e4399d441b8dc89f9ebce43962ee486ed7605fb51748a0ca9bc31b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.453259 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.124112400+02:00" level=info msg="ignoring event" container=57318ade4dc2951d85051abfb7931bafe9ae4a963f719d0c687dce5b6dfc0d66 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.454339 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.124213233+02:00" level=info msg="ignoring event" container=727314c054e9d344f761cd699880fd6ae64bc31252c888c14683cb451339b07f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.455516 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.124257260+02:00" level=info msg="ignoring event" container=0b645d0348dac3f7a66e87edb923183b48f480d1bba498483035764dd8f523c5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.456950 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.128895825+02:00" level=info msg="ignoring event" container=be22d157ebffb7b46448bc010ae3134a6252b82fe76c93401e5738f27b824e55 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.458604 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.189795188+02:00" level=info msg="ignoring event" container=4832c83f8b5b981191dff76ca6ac161a9dd5bb77ef1e5b933f68cf1472bdbf95 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.459898 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.189868788+02:00" level=info msg="ignoring event" container=94f0e62d517d84be089d83d28d7cc91e06969102ee36cc9ac2e6b75de031d2ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.461262 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.203727478+02:00" level=info msg="ignoring event" container=c34c1a99a589bb54f078f16efbb58798995bad408505e370b16197c5524bfe24 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.462282 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.208515476+02:00" level=info msg="ignoring event" container=29f2b7a1ac050f11998daa82fe64fa00ff951352b37e04595ae7ac8dd496dd9c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.463460 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.208606190+02:00" level=info msg="ignoring event" container=9c6220c7d8fed1745e678ec33fcfc6fd0dcf286bd717c69f1028bb2a16fe04cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.464333 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.222061618+02:00" level=info msg="ignoring event" container=4c53e46065fcc52e0f3670c9a91c65dcf2a66b61a3cb39c6b64daf1a0dda43b7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.465154 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.222126737+02:00" level=info msg="ignoring event" container=7197b16824279b391f7646f296b0d52c186043b9d0c29accf6298ec7116b009b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.466091 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.222167049+02:00" level=info msg="ignoring event" container=b31e057e1f93ec35e5e248c7f1a406316d8bc2baa0a498a4d952e23e8771ddeb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.467090 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.222191765+02:00" level=info msg="ignoring event" container=dcbbb6e8e2b16c7b65909c99953846fe2ee4cd19f9b4761661fbf0d2c82f779f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.468294 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.222220134+02:00" level=info msg="ignoring event" container=f3c07553d6a0704770c5c0b0783e74e520f08d4bbf732547873dba0ac7f7fb18 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.469537 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.229936620+02:00" level=info msg="ignoring event" container=e7d5dadfbd22ee1924019a3b33f9a455f094b74b1c25eb111e85593197cc7da6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.470732 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.252628726+02:00" level=info msg="ignoring event" container=6274f84c5323d2373e9fc37ea3c4d683888228ec8ac9607618455d9875536881 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.472481 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.252673402+02:00" level=info msg="ignoring event" container=6fce30c7867ba0f3dfa055037029febd6f15763cb9520303707ce55620d73103 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.473642 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.252694832+02:00" level=info msg="ignoring event" container=ca2a89c1be4086317f904f329e9d7df00bf104cdf573f2ce9d9a186611be6724 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.474606 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.261016284+02:00" level=info msg="ignoring event" container=a5941a8b875c9e5b67b8f8db0c66ab6acdbd02e34742fc6a23763a2f9c6bc00e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.475534 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.278724034+02:00" level=info msg="ignoring event" container=528c01f3f4bec27a2fa475fdf1167df56e4a1ab43ee4d2bcc62f6d100abfced7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.476356 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.304470980+02:00" level=info msg="ignoring event" container=efc422f74696137839184f2be37b6f3944935d7efad32870401bc5a52015f4eb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.477201 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.304533935+02:00" level=info msg="ignoring event" container=1872386e0d1547a5fcb388afa044617d41c57399df73d9e9dae3d4beb4df7a29 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.478025 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.304566163+02:00" level=info msg="ignoring event" container=90309d81ec08717b1d06657ca2d53c0ead2bf11ac75dc9c2a3742bbc5dce0760 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.478836 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.304582332+02:00" level=info msg="ignoring event" container=9f665539fab976fd7c0b48b4b19d53e39a9cc71137f93441be5906eaa1186e27 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.479666 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.331006884+02:00" level=info msg="ignoring event" container=99f2eff23303047b98628d58d11fe869d0c147921b32fb9048bd725a36e71762 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.480366 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.331097381+02:00" level=info msg="ignoring event" container=73346a2e544e571c1adc6b5ce3506a459e4d58418ce03ad4c833b23db1637c7c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.481079 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.331153286+02:00" level=info msg="ignoring event" container=8d148ba52a5bddf4d456c909c42f857795e3b884c95b2d06ff42bda0fb36df36 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.481878 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.331170137+02:00" level=info msg="ignoring event" container=a7009eec23b4d7b57819006b58e56a27ef182f5a870b7bd2e5588da92c47eb17 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.482840 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.331190197+02:00" level=info msg="ignoring event" container=9f81d3f9257f439368b2352c73e488efe5fa21c715ef42d28ad6b51eaa57ea1b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.483508 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.331202501+02:00" level=info msg="ignoring event" container=702c4db2a5a2e38683970c0b29a2c0f0fcd0f28003de7e27083139774f5428ac module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.484433 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.333572790+02:00" level=info msg="ignoring event" container=088c8aae816da070ba97a101d921d54ba3933943c7d1fe144c0c5c0a896d4e6c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.485315 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.343876943+02:00" level=info msg="ignoring event" container=62e3710f855e2116d89dee70c3c34e31a1afbb01e8588367a368ded018d039f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.486035 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.343954972+02:00" level=info msg="ignoring event" container=d8dd3aaeb89747752ec93538bdb4e6a2c5a99a27d4feccf81454c4c16a5e48c3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.486828 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.343978469+02:00" level=info msg="ignoring event" container=f8ac6ad18c9296c758a95da14a0f442cbe3ba99e4ea79c06ee9520c11408b1b9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.487651 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.344012416+02:00" level=info msg="ignoring event" container=b7f335f10c042fbfe10f20d2ed12de77844cc9aef7d133ef5441f08216c78823 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.488509 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.344034571+02:00" level=info msg="ignoring event" container=17d1cff3b183a85687e94d350fb1ad2573019bf8e39a6269488a5c1006067dbf module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.489304 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.344065940+02:00" level=info msg="ignoring event" container=e4405ce259886f617105b85b98f3a92b045add97c235d676b164b58e6232d06b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.490045 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.344088984+02:00" level=info msg="ignoring event" container=fa5d9368522e80e4c07a7774c8d817ee11a1a413be82e47f1ec0d2d9dbb1ca0e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.492143 32984 out.go:129] Mar 24 20:33:59 ubuntu dockerd[8910]: time="2021-03-24T20:33:59.355041756+02:00" level=info msg="ignoring event" container=27aa539ac1993fdac5707811ce507cb12f25dc7a53a8225a8869f1eae2b5d9c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.493032 32984 out.go:129] Mar 24 20:34:41 ubuntu dockerd[8910]: time="2021-03-24T20:34:41.229138290+02:00" level=info msg="ignoring event" container=284b2121115eb24b1b224f71c3a7eca7cf10a073a8cf05452af22a3256eb3c31 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.493969 32984 out.go:129] Mar 24 20:34:41 ubuntu dockerd[8910]: time="2021-03-24T20:34:41.923094990+02:00" level=info msg="ignoring event" container=0142af4a7742603451fc0ea044ccf53fcd7c9dfbe7d0c7d951f21fcb53e5cd28 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.494978 32984 out.go:129] Mar 24 20:34:42 ubuntu dockerd[8910]: time="2021-03-24T20:34:42.095523410+02:00" level=info msg="ignoring event" container=9492cfe3b64f47d0cc1dcc9ec781f2b442d050e4548c3c781c46e5254926430f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.495729 32984 out.go:129] Mar 24 20:34:42 ubuntu dockerd[8910]: time="2021-03-24T20:34:42.269540835+02:00" level=info msg="ignoring event" container=2e27cf8c1f7318c37c07c26ba403d8fa7601a366feda14ee5518753eab89d01b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.497020 32984 out.go:129] Mar 24 20:34:42 ubuntu dockerd[8910]: time="2021-03-24T20:34:42.659734454+02:00" level=info msg="ignoring event" container=d14f2d972fc701f476e19cbeed1fd8a7262315a35422130ec48a1d08982ee6d2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.497974 32984 out.go:129] Mar 24 20:34:42 ubuntu dockerd[8910]: time="2021-03-24T20:34:42.837467274+02:00" level=info msg="ignoring event" container=0af84e444da27a23e9bb796893586b4c1575411d86a8057bcecc670ecae1c546 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.498840 32984 out.go:129] Mar 24 20:34:43 ubuntu dockerd[8910]: time="2021-03-24T20:34:43.015436018+02:00" level=info msg="ignoring event" container=b19f4903e48b5ff5c4a896667efacd1f53edea036fd58a6eff9f8403ce78451f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.500057 32984 out.go:129] Mar 24 20:34:43 ubuntu dockerd[8910]: time="2021-03-24T20:34:43.197173399+02:00" level=info msg="ignoring event" container=e29bb0bc72a986aded49145563dbc2a528595a4931bb62a840fcd8159f34884d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
I0324 20:37:30.501452 32984 out.go:129] Mar 24 20:35:48 ubuntu dockerd[8910]: time="2021-03-24T20:35:48.860937367+02:00" level=warning msg="Published ports are discarded when using host network mode"
I0324 20:37:30.502660 32984 out.go:129] Mar 24 20:35:51 ubuntu dockerd[8910]: time="2021-03-24T20:35:51.032585582+02:00" level=warning msg="Published ports are discarded when using host network mode"
I0324 20:37:30.503550 32984 out.go:129]
I0324 20:37:30.504431 32984 out.go:129] ==> container status <==
I0324 20:37:30.504477 32984 exec_runner.go:52] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0324 20:37:30.590000 32984 out.go:129] sudo: crictl: command not found
I0324 20:37:30.591007 32984 out.go:129] CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
I0324 20:37:30.591920 32984 out.go:129] 696e909b7688 85069258b98a "/storage-provisioner" About a minute ago Up About a minute k8s_storage-provisioner_storage-provisioner_kube-system_b1a31819-050a-4a35-a815-91b404320d39_0
I0324 20:37:30.592837 32984 out.go:129] 1555bd76a7cb k8s.gcr.io/pause:3.2 "/pause" About a minute ago Up About a minute k8s_POD_storage-provisioner_kube-system_b1a31819-050a-4a35-a815-91b404320d39_0
I0324 20:37:30.593562 32984 out.go:129] 28d0774f2681 77f36e31bedf "/controller --port=…" About a minute ago Up About a minute k8s_controller_controller-7646d7cd58-4hscq_metallb-system_17404154-4924-4abd-baf3-6e74be215f11_0
I0324 20:37:30.594277 32984 out.go:129] 1cc69d6af318 bfe3a36ebd25 "/coredns -conf /etc…" About a minute ago Up About a minute k8s_coredns_coredns-74ff55c5b-q8fqk_kube-system_294883fd-93ae-4876-9334-264f2141d974_0
I0324 20:37:30.595147 32984 out.go:129] 0d761c150abd 4fa93685c115 "/speaker --port=747…" About a minute ago Up About a minute k8s_speaker_speaker-vt68t_metallb-system_44ce0d6e-6d27-493b-9089-26867b2efca3_0
I0324 20:37:30.595971 32984 out.go:129] d5b71ec9bb20 k8s.gcr.io/pause:3.2 "/pause" About a minute ago Up About a minute k8s_POD_controller-7646d7cd58-4hscq_metallb-system_17404154-4924-4abd-baf3-6e74be215f11_0
I0324 20:37:30.596893 32984 out.go:129] b4651c06f80e k8s.gcr.io/pause:3.2 "/pause" About a minute ago Up About a minute k8s_POD_coredns-74ff55c5b-q8fqk_kube-system_294883fd-93ae-4876-9334-264f2141d974_0
I0324 20:37:30.597617 32984 out.go:129] 954a17c3f5ef 43154ddb57a8 "/usr/local/bin/kube…" About a minute ago Up About a minute k8s_kube-proxy_kube-proxy-rc4s5_kube-system_322257a6-4656-4416-9907-600cff97668b_0
I0324 20:37:30.598533 32984 out.go:129] 95b89051def9 k8s.gcr.io/pause:3.2 "/pause" About a minute ago Up About a minute k8s_POD_speaker-vt68t_metallb-system_44ce0d6e-6d27-493b-9089-26867b2efca3_0
I0324 20:37:30.599278 32984 out.go:129] 07f7ff2febac k8s.gcr.io/pause:3.2 "/pause" About a minute ago Up About a minute k8s_POD_kube-proxy-rc4s5_kube-system_322257a6-4656-4416-9907-600cff97668b_0
I0324 20:37:30.600026 32984 out.go:129] 69b34f133437 a27166429d98 "kube-controller-man…" 2 minutes ago Up 2 minutes k8s_kube-controller-manager_kube-controller-manager-ubuntu_kube-system_932a378e61cc6a1200c450416ec5a096_0
I0324 20:37:30.600748 32984 out.go:129] ee9219711a8d a8c2fdb8bf76 "kube-apiserver --ad…" 2 minutes ago Up 2 minutes k8s_kube-apiserver_kube-apiserver-ubuntu_kube-system_b99608f5ec34b1ffda48abcbb9412760_0
I0324 20:37:30.601508 32984 out.go:129] 05b06e91f987 ed2c44fbdd78 "kube-scheduler --au…" 2 minutes ago Up 2 minutes k8s_kube-scheduler_kube-scheduler-ubuntu_kube-system_6b4a0ee8b3d15a1c2e47c15d32e6eb0d_0
I0324 20:37:30.602330 32984 out.go:129] 4c376b899994 0369cf4303ff "etcd --advertise-cl…" 2 minutes ago Up 2 minutes k8s_etcd_etcd-ubuntu_kube-system_4cc7de39985d1c5e97e3c5e69d9af462_0
I0324 20:37:30.603109 32984 out.go:129] af9f659eeaab k8s.gcr.io/pause:3.2 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-controller-manager-ubuntu_kube-system_932a378e61cc6a1200c450416ec5a096_0
I0324 20:37:30.604019 32984 out.go:129] 1f29cd867b59 k8s.gcr.io/pause:3.2 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-apiserver-ubuntu_kube-system_b99608f5ec34b1ffda48abcbb9412760_0
I0324 20:37:30.605055 32984 out.go:129] d90f8808a7b7 k8s.gcr.io/pause:3.2 "/pause" 2 minutes ago Up 2 minutes k8s_POD_etcd-ubuntu_kube-system_4cc7de39985d1c5e97e3c5e69d9af462_0
I0324 20:37:30.606381 32984 out.go:129] 0934ed916946 k8s.gcr.io/pause:3.2 "/pause" 2 minutes ago Up 2 minutes k8s_POD_kube-scheduler-ubuntu_kube-system_6b4a0ee8b3d15a1c2e47c15d32e6eb0d_0
I0324 20:37:30.607388 32984 out.go:129] 77648145a856 rabbitmq:3-management "docker-entrypoint.s…" 2 days ago Up 7 minutes 4369/tcp, 5671/tcp, 15671/tcp, 15691-15692/tcp, 25672/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:5555->15672/tcp rabbitmq
I0324 20:37:30.608261 32984 out.go:129]
I0324 20:37:30.609129 32984 out.go:129] ==> coredns [1cc69d6af318] <==
I0324 20:37:30.609190 32984 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 60 1cc69d6af318"
I0324 20:37:30.667981 32984 out.go:129] .:53
I0324 20:37:30.668887 32984 out.go:129] [INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
I0324 20:37:30.669827 32984 out.go:129] CoreDNS-1.7.0
I0324 20:37:30.671049 32984 out.go:129] linux/amd64, go1.14.4, f59c03d
I0324 20:37:30.671854 32984 out.go:129]
I0324 20:37:30.672750 32984 out.go:129] ==> describe nodes <==
I0324 20:37:30.672797 32984 exec_runner.go:52] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0324 20:37:30.824018 32984 out.go:129] Name: ubuntu
I0324 20:37:30.825080 32984 out.go:129] Roles: control-plane,master
I0324 20:37:30.826135 32984 out.go:129] Labels: beta.kubernetes.io/arch=amd64
I0324 20:37:30.827253 32984 out.go:129] beta.kubernetes.io/os=linux
I0324 20:37:30.828184 32984 out.go:129] kubernetes.io/arch=amd64
I0324 20:37:30.829150 32984 out.go:129] kubernetes.io/hostname=ubuntu
I0324 20:37:30.829943 32984 out.go:129] kubernetes.io/os=linux
I0324 20:37:30.830766 32984 out.go:129] minikube.k8s.io/commit=09ee84d530de4a92f00f1c5dbc34cead092b95bc
I0324 20:37:30.831482 32984 out.go:129] minikube.k8s.io/name=minikube
I0324 20:37:30.832193 32984 out.go:129] minikube.k8s.io/updated_at=2021_03_24T20_35_33_0700
I0324 20:37:30.832959 32984 out.go:129] minikube.k8s.io/version=v1.18.1
I0324 20:37:30.833696 32984 out.go:129] node-role.kubernetes.io/control-plane=
I0324 20:37:30.834446 32984 out.go:129] node-role.kubernetes.io/master=
I0324 20:37:30.835153 32984 out.go:129] Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
I0324 20:37:30.835870 32984 out.go:129] node.alpha.kubernetes.io/ttl: 0
I0324 20:37:30.836555 32984 out.go:129] volumes.kubernetes.io/controller-managed-attach-detach: true
I0324 20:37:30.837265 32984 out.go:129] CreationTimestamp: Wed, 24 Mar 2021 20:35:29 +0200
I0324 20:37:30.838009 32984 out.go:129] Taints: <none>
I0324 20:37:30.838752 32984 out.go:129] Unschedulable: false
I0324 20:37:30.839502 32984 out.go:129] Lease:
I0324 20:37:30.840197 32984 out.go:129] HolderIdentity: ubuntu
I0324 20:37:30.840932 32984 out.go:129] AcquireTime: <unset>
I0324 20:37:30.841683 32984 out.go:129] RenewTime: Wed, 24 Mar 2021 20:37:29 +0200
I0324 20:37:30.842395 32984 out.go:129] Conditions:
I0324 20:37:30.843151 32984 out.go:129] Type Status LastHeartbeatTime LastTransitionTime Reason Message
I0324 20:37:30.843846 32984 out.go:129] ---- ------ ----------------- ------------------ ------ -------
I0324 20:37:30.844529 32984 out.go:129] MemoryPressure False Wed, 24 Mar 2021 20:35:48 +0200 Wed, 24 Mar 2021 20:35:28 +0200 KubeletHasSufficientMemory kubelet has sufficient memory available
I0324 20:37:30.845342 32984 out.go:129] DiskPressure False Wed, 24 Mar 2021 20:35:48 +0200 Wed, 24 Mar 2021 20:35:28 +0200 KubeletHasNoDiskPressure kubelet has no disk pressure
I0324 20:37:30.846552 32984 out.go:129] PIDPressure False Wed, 24 Mar 2021 20:35:48 +0200 Wed, 24 Mar 2021 20:35:28 +0200 KubeletHasSufficientPID kubelet has sufficient PID available
I0324 20:37:30.847339 32984 out.go:129] Ready True Wed, 24 Mar 2021 20:35:48 +0200 Wed, 24 Mar 2021 20:35:48 +0200 KubeletReady kubelet is posting ready status. AppArmor enabled
I0324 20:37:30.848070 32984 out.go:129] Addresses:
I0324 20:37:30.848866 32984 out.go:129] InternalIP: 192.168.1.6
I0324 20:37:30.849843 32984 out.go:129] Hostname: ubuntu
I0324 20:37:30.850744 32984 out.go:129] Capacity:
I0324 20:37:30.851637 32984 out.go:129] cpu: 8
I0324 20:37:30.852474 32984 out.go:129] ephemeral-storage: 153250288Ki
I0324 20:37:30.853249 32984 out.go:129] hugepages-1Gi: 0
I0324 20:37:30.854102 32984 out.go:129] hugepages-2Mi: 0
I0324 20:37:30.854919 32984 out.go:129] memory: 20472740Ki
I0324 20:37:30.855715 32984 out.go:129] pods: 1k
I0324 20:37:30.856703 32984 out.go:129] Allocatable:
I0324 20:37:30.857484 32984 out.go:129] cpu: 8
I0324 20:37:30.858324 32984 out.go:129] ephemeral-storage: 153250288Ki
I0324 20:37:30.859155 32984 out.go:129] hugepages-1Gi: 0
I0324 20:37:30.860198 32984 out.go:129] hugepages-2Mi: 0
I0324 20:37:30.861048 32984 out.go:129] memory: 20472740Ki
I0324 20:37:30.861912 32984 out.go:129] pods: 1k
I0324 20:37:30.862967 32984 out.go:129] System Info:
I0324 20:37:30.863949 32984 out.go:129] Machine ID: fa9a0e3e3eef4ba4b09a255381cd0cda
I0324 20:37:30.864858 32984 out.go:129] System UUID: 7c5f4d56-adf4-6fee-22cf-72da73ac621b
I0324 20:37:30.865702 32984 out.go:129] Boot ID: 983b01e8-5084-4d48-a687-19e792a4057c
I0324 20:37:30.866662 32984 out.go:129] Kernel Version: 5.8.0-45-generic
I0324 20:37:30.867629 32984 out.go:129] OS Image: Ubuntu 20.04.2 LTS
I0324 20:37:30.868946 32984 out.go:129] Operating System: linux
I0324 20:37:30.870177 32984 out.go:129] Architecture: amd64
I0324 20:37:30.871345 32984 out.go:129] Container Runtime Version: docker://20.10.5
I0324 20:37:30.872269 32984 out.go:129] Kubelet Version: v1.20.2
I0324 20:37:30.873063 32984 out.go:129] Kube-Proxy Version: v1.20.2
I0324 20:37:30.873884 32984 out.go:129] PodCIDR: 10.244.0.0/24
I0324 20:37:30.874992 32984 out.go:129] PodCIDRs: 10.244.0.0/24
I0324 20:37:30.875905 32984 out.go:129] Non-terminated Pods: (9 in total)
I0324 20:37:30.876684 32984 out.go:129] Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
I0324 20:37:30.877419 32984 out.go:129] --------- ---- ------------ ---------- --------------- ------------- ---
I0324 20:37:30.878265 32984 out.go:129] kube-system coredns-74ff55c5b-q8fqk 100m (1%) 0 (0%) 70Mi (0%) 170Mi (0%) 102s
I0324 20:37:30.879101 32984 out.go:129] kube-system etcd-ubuntu 100m (1%) 0 (0%) 100Mi (0%) 0 (0%) 111s
I0324 20:37:30.879853 32984 out.go:129] kube-system kube-apiserver-ubuntu 250m (3%) 0 (0%) 0 (0%) 0 (0%) 111s
I0324 20:37:30.880578 32984 out.go:129] kube-system kube-controller-manager-ubuntu 200m (2%) 0 (0%) 0 (0%) 0 (0%) 111s
I0324 20:37:30.881274 32984 out.go:129] kube-system kube-proxy-rc4s5 0 (0%) 0 (0%) 0 (0%) 0 (0%) 102s
I0324 20:37:30.882061 32984 out.go:129] kube-system kube-scheduler-ubuntu 100m (1%) 0 (0%) 0 (0%) 0 (0%) 111s
I0324 20:37:30.883061 32984 out.go:129] kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 116s
I0324 20:37:30.884030 32984 out.go:129] metallb-system controller-7646d7cd58-4hscq 100m (1%) 100m (1%) 100Mi (0%) 100Mi (0%) 102s
I0324 20:37:30.884882 32984 out.go:129] metallb-system speaker-vt68t 100m (1%) 100m (1%) 100Mi (0%) 100Mi (0%) 102s
I0324 20:37:30.885743 32984 out.go:129] Allocated resources:
I0324 20:37:30.886714 32984 out.go:129] (Total limits may be over 100 percent, i.e., overcommitted.)
I0324 20:37:30.887542 32984 out.go:129] Resource Requests Limits
I0324 20:37:30.888478 32984 out.go:129] -------- -------- ------
I0324 20:37:30.889321 32984 out.go:129] cpu 950m (11%) 200m (2%)
I0324 20:37:30.890285 32984 out.go:129] memory 370Mi (1%) 370Mi (1%)
I0324 20:37:30.891164 32984 out.go:129] ephemeral-storage 100Mi (0%) 0 (0%)
I0324 20:37:30.891965 32984 out.go:129] hugepages-1Gi 0 (0%) 0 (0%)
I0324 20:37:30.892618 32984 out.go:129] hugepages-2Mi 0 (0%) 0 (0%)
I0324 20:37:30.893328 32984 out.go:129] Events:
I0324 20:37:30.894038 32984 out.go:129] Type Reason Age From Message
I0324 20:37:30.894860 32984 out.go:129] ---- ------ ---- ---- -------
I0324 20:37:30.895647 32984 out.go:129] Normal Starting 2m36s kubelet Starting kubelet.
I0324 20:37:30.896564 32984 out.go:129] Normal NodeHasSufficientMemory 2m36s (x4 over 2m36s) kubelet Node ubuntu status is now: NodeHasSufficientMemory
I0324 20:37:30.897590 32984 out.go:129] Normal NodeHasNoDiskPressure 2m36s (x4 over 2m36s) kubelet Node ubuntu status is now: NodeHasNoDiskPressure
I0324 20:37:30.898420 32984 out.go:129] Normal NodeHasSufficientPID 2m36s (x3 over 2m36s) kubelet Node ubuntu status is now: NodeHasSufficientPID
I0324 20:37:30.899472 32984 out.go:129] Normal NodeAllocatableEnforced 2m36s kubelet Updated Node Allocatable limit across pods
I0324 20:37:30.900731 32984 out.go:129] Normal Starting 111s kubelet Starting kubelet.
I0324 20:37:30.901844 32984 out.go:129] Normal NodeHasSufficientMemory 111s kubelet Node ubuntu status is now: NodeHasSufficientMemory
I0324 20:37:30.903052 32984 out.go:129] Normal NodeHasNoDiskPressure 111s kubelet Node ubuntu status is now: NodeHasNoDiskPressure
I0324 20:37:30.904224 32984 out.go:129] Normal NodeHasSufficientPID 111s kubelet Node ubuntu status is now: NodeHasSufficientPID
I0324 20:37:30.905252 32984 out.go:129] Normal NodeAllocatableEnforced 111s kubelet Updated Node Allocatable limit across pods
I0324 20:37:30.906423 32984 out.go:129] Normal NodeReady 102s kubelet Node ubuntu status is now: NodeReady
I0324 20:37:30.907477 32984 out.go:129] Normal Starting 99s kube-proxy Starting kube-proxy.
I0324 20:37:30.908350 32984 out.go:129]
I0324 20:37:30.909214 32984 out.go:129] ==> dmesg <==
I0324 20:37:30.909270 32984 exec_runner.go:52] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 60"
I0324 20:37:30.926281 32984 out.go:129] [Mar24 20:27] [Firmware Bug]: the BIOS has corrupted hw-PMU resources (MSR 189 is 4000ff)
I0324 20:37:30.927835 32984 out.go:129] [ +0.000334] Intel PMU driver.
I0324 20:37:30.929411 32984 out.go:129] [ +0.031041] #2
I0324 20:37:30.930598 32984 out.go:129] [ +0.003718] #3
I0324 20:37:30.931659 32984 out.go:129] [ +0.003304] #4
I0324 20:37:30.932729 32984 out.go:129] [ +0.003963] #5
I0324 20:37:30.933586 32984 out.go:129] [ +0.004102] #6
I0324 20:37:30.934493 32984 out.go:129] [ +0.003391] #7
I0324 20:37:30.935317 32984 out.go:129] [ +0.070189] pmd_set_huge: Cannot satisfy [mem 0xf0000000-0xf0200000] with a huge-page mapping due to MTRR override.
I0324 20:37:30.936104 32984 out.go:129] [ +2.688285] platform eisa.0: EISA: Cannot allocate resource for mainboard
I0324 20:37:30.936959 32984 out.go:129] [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 1
I0324 20:37:30.937735 32984 out.go:129] [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 2
I0324 20:37:30.938504 32984 out.go:129] [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 3
I0324 20:37:30.939337 32984 out.go:129] [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 4
I0324 20:37:30.940470 32984 out.go:129] [ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 5
I0324 20:37:30.941369 32984 out.go:129] [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 6
I0324 20:37:30.942552 32984 out.go:129] [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 7
I0324 20:37:30.943584 32984 out.go:129] [ +0.000001] platform eisa.0: Cannot allocate resource for EISA slot 8
I0324 20:37:30.944637 32984 out.go:129] [ +0.848470] piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled!
I0324 20:37:30.945595 32984 out.go:129] [ +1.348218] Started bpfilter
I0324 20:37:30.946562 32984 out.go:129] [ +7.412772] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:01:14:91:82:38:47:ba:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0x00 TTL=1 ID=0 DF PROTO=2
I0324 20:37:30.947414 32984 out.go:129] [ +0.241575] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:b4:69:21:9d:e7:25:08:00 SRC=192.168.1.7 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=61621 PROTO=2
I0324 20:37:30.948271 32984 out.go:129] [ +0.678180] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:01:14:91:82:38:47:ba:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0x00 TTL=1 ID=0 DF PROTO=2
I0324 20:37:30.949085 32984 out.go:129] [ +0.322250] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:b4:69:21:9d:e7:25:08:00 SRC=192.168.1.7 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=61622 PROTO=2
I0324 20:37:30.950083 32984 out.go:129] [Mar24 20:28] kauditd_printk_skb: 28 callbacks suppressed
I0324 20:37:30.951180 32984 out.go:129] [Mar24 20:29] nvme nvme0: I/O 46 QID 1 timeout, completion polled
I0324 20:37:30.952085 32984 out.go:129] [ +0.000052] nvme nvme0: I/O 164 QID 4 timeout, aborting
I0324 20:37:30.952928 32984 out.go:129] [ +0.000142] nvme nvme0: Abort status: 0x0
I0324 20:37:30.953758 32984 out.go:129] [ +12.072120] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
I0324 20:37:30.954620 32984 out.go:129] [ +17.489176] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:01:14:91:82:38:47:ba:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0xC0 TTL=1 ID=34466 PROTO=2
I0324 20:37:30.955539 32984 out.go:129] [ +1.511615] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:b4:69:21:9d:e7:25:08:00 SRC=192.168.1.7 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=61623 PROTO=2
I0324 20:37:30.956427 32984 out.go:129] [ +19.787884] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:01:14:91:82:38:47:ba:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0x00 TTL=1 ID=0 DF PROTO=2
I0324 20:37:30.957578 32984 out.go:129] [ +0.102200] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:26:f5:a2:d0:12:ee:08:00 SRC=192.168.1.3 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
I0324 20:37:30.958733 32984 out.go:129] [Mar24 20:30] nvme nvme0: I/O 216 QID 1 timeout, completion polled
I0324 20:37:30.960070 32984 out.go:129] [Mar24 20:31] nvme nvme0: I/O 83 QID 3 timeout, completion polled
I0324 20:37:30.960904 32984 out.go:129] [ +0.000043] nvme nvme0: I/O 56 QID 4 timeout, completion polled
I0324 20:37:30.961969 32984 out.go:129] [ +11.234838] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:01:14:91:82:38:47:ba:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0xC0 TTL=1 ID=34469 PROTO=2
I0324 20:37:30.962777 32984 out.go:129] [ +6.553221] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:26:f5:a2:d0:12:ee:08:00 SRC=192.168.1.3 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
I0324 20:37:30.963648 32984 out.go:129] [Mar24 20:32] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:01:14:91:82:38:47:ba:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0x00 TTL=1 ID=0 DF PROTO=2
I0324 20:37:30.964531 32984 out.go:129] [ +0.284230] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:b4:69:21:9d:e7:25:08:00 SRC=192.168.1.7 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=61628 PROTO=2
I0324 20:37:30.965495 32984 out.go:129] [ +0.024713] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:26:f5:a2:d0:12:ee:08:00 SRC=192.168.1.3 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0xC0 TTL=1 ID=0 DF PROTO=2
I0324 20:37:30.966450 32984 out.go:129] [ +6.172867] nvme nvme0: I/O 70 QID 3 timeout, aborting
I0324 20:37:30.967332 32984 out.go:129] [ +0.000235] nvme nvme0: Abort status: 0x0
I0324 20:37:30.968206 32984 out.go:129] [ +48.125285] nvme nvme0: I/O 68 QID 7 timeout, completion polled
I0324 20:37:30.969067 32984 out.go:129] [Mar24 20:33] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:01:14:91:82:38:47:ba:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0xC0 TTL=1 ID=34472 PROTO=2
I0324 20:37:30.969893 32984 out.go:129] [ +0.264116] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:b4:69:21:9d:e7:25:08:00 SRC=192.168.1.7 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=61629 PROTO=2
I0324 20:37:30.970890 32984 out.go:129] [Mar24 20:34] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:01:14:91:82:38:47:ba:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0x00 TTL=1 ID=0 DF PROTO=2
I0324 20:37:30.971822 32984 out.go:129] [ +0.253146] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:b4:69:21:9d:e7:25:08:00 SRC=192.168.1.7 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=61630 PROTO=2
I0324 20:37:30.972655 32984 out.go:129] [ +24.550175] nvme nvme0: I/O 100 QID 2 timeout, completion polled
I0324 20:37:30.973569 32984 out.go:129] [Mar24 20:35] nvme nvme0: I/O 105 QID 2 timeout, completion polled
I0324 20:37:30.974532 32984 out.go:129] [ +26.450239] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:01:14:91:82:38:47:ba:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0xC0 TTL=1 ID=34475 PROTO=2
I0324 20:37:30.975291 32984 out.go:129] [ +5.534665] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:b4:69:21:9d:e7:25:08:00 SRC=192.168.1.7 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=61631 PROTO=2
I0324 20:37:30.976013 32984 out.go:129] [Mar24 20:36] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:01:14:91:82:38:47:ba:08:00 SRC=192.168.1.1 DST=224.0.0.1 LEN=28 TOS=0x00 PREC=0x00 TTL=1 ID=0 DF PROTO=2
I0324 20:37:30.976741 32984 out.go:129] [ +0.224513] [UFW BLOCK] IN=ens33 OUT= MAC=01:00:5e:00:00:fb:b4:69:21:9d:e7:25:08:00 SRC=192.168.1.7 DST=224.0.0.251 LEN=32 TOS=0x00 PREC=0x00 TTL=1 ID=61632 PROTO=2
I0324 20:37:30.977405 32984 out.go:129]
I0324 20:37:30.978232 32984 out.go:129] ==> etcd [4c376b899994] <==
I0324 20:37:30.978294 32984 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 60 4c376b899994"
I0324 20:37:31.043495 32984 out.go:129] [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
I0324 20:37:31.044499 32984 out.go:129] 2021-03-24 18:35:25.558347 I | etcdmain: etcd Version: 3.4.13
I0324 20:37:31.045549 32984 out.go:129] 2021-03-24 18:35:25.558408 I | etcdmain: Git SHA: ae9734ed2
I0324 20:37:31.046775 32984 out.go:129] 2021-03-24 18:35:25.558414 I | etcdmain: Go Version: go1.12.17
I0324 20:37:31.047768 32984 out.go:129] 2021-03-24 18:35:25.558418 I | etcdmain: Go OS/Arch: linux/amd64
I0324 20:37:31.048967 32984 out.go:129] 2021-03-24 18:35:25.558423 I | etcdmain: setting maximum number of CPUs to 8, total number of available CPUs is 8
I0324 20:37:31.050351 32984 out.go:129] [WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
I0324 20:37:31.051336 32984 out.go:129] 2021-03-24 18:35:25.558528 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
I0324 20:37:31.052271 32984 out.go:129] 2021-03-24 18:35:25.559526 I | embed: name = ubuntu
I0324 20:37:31.053252 32984 out.go:129] 2021-03-24 18:35:25.559570 I | embed: data dir = /var/lib/minikube/etcd
I0324 20:37:31.054109 32984 out.go:129] 2021-03-24 18:35:25.559579 I | embed: member dir = /var/lib/minikube/etcd/member
I0324 20:37:31.054921 32984 out.go:129] 2021-03-24 18:35:25.559584 I | embed: heartbeat = 100ms
I0324 20:37:31.055718 32984 out.go:129] 2021-03-24 18:35:25.559589 I | embed: election = 1000ms
I0324 20:37:31.056437 32984 out.go:129] 2021-03-24 18:35:25.559594 I | embed: snapshot count = 10000
I0324 20:37:31.057515 32984 out.go:129] 2021-03-24 18:35:25.559617 I | embed: advertise client URLs = https://192.168.1.6:2379
I0324 20:37:31.058384 32984 out.go:129] 2021-03-24 18:35:25.571969 I | etcdserver: starting member cd7087fb01a38621 in cluster 64f9292001ec2152
I0324 20:37:31.059112 32984 out.go:129] raft2021/03/24 18:35:25 INFO: cd7087fb01a38621 switched to configuration voters=()
I0324 20:37:31.059878 32984 out.go:129] raft2021/03/24 18:35:25 INFO: cd7087fb01a38621 became follower at term 0
I0324 20:37:31.060583 32984 out.go:129] raft2021/03/24 18:35:25 INFO: newRaft cd7087fb01a38621 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
I0324 20:37:31.061322 32984 out.go:129] raft2021/03/24 18:35:25 INFO: cd7087fb01a38621 became follower at term 1
I0324 20:37:31.062141 32984 out.go:129] raft2021/03/24 18:35:25 INFO: cd7087fb01a38621 switched to configuration voters=(14803481487300855329)
I0324 20:37:31.063110 32984 out.go:129] 2021-03-24 18:35:25.574709 W | auth: simple token is not cryptographically signed
I0324 20:37:31.064103 32984 out.go:129] 2021-03-24 18:35:25.578321 I | etcdserver: starting server... [version: 3.4.13, cluster version: to_be_decided]
I0324 20:37:31.064905 32984 out.go:129] 2021-03-24 18:35:25.579340 I | etcdserver: cd7087fb01a38621 as single-node; fast-forwarding 9 ticks (election ticks 10)
I0324 20:37:31.065644 32984 out.go:129] raft2021/03/24 18:35:25 INFO: cd7087fb01a38621 switched to configuration voters=(14803481487300855329)
I0324 20:37:31.066489 32984 out.go:129] 2021-03-24 18:35:25.581194 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
I0324 20:37:31.067351 32984 out.go:129] 2021-03-24 18:35:25.581295 I | etcdserver/membership: added member cd7087fb01a38621 [https://192.168.1.6:2380] to cluster 64f9292001ec2152
I0324 20:37:31.068428 32984 out.go:129] 2021-03-24 18:35:25.581329 I | embed: listening for peers on 192.168.1.6:2380
I0324 20:37:31.069433 32984 out.go:129] 2021-03-24 18:35:25.581562 I | embed: listening for metrics on http://127.0.0.1:2381
I0324 20:37:31.070437 32984 out.go:129] raft2021/03/24 18:35:26 INFO: cd7087fb01a38621 is starting a new election at term 1
I0324 20:37:31.071555 32984 out.go:129] raft2021/03/24 18:35:26 INFO: cd7087fb01a38621 became candidate at term 2
I0324 20:37:31.072530 32984 out.go:129] raft2021/03/24 18:35:26 INFO: cd7087fb01a38621 received MsgVoteResp from cd7087fb01a38621 at term 2
I0324 20:37:31.073457 32984 out.go:129] raft2021/03/24 18:35:26 INFO: cd7087fb01a38621 became leader at term 2
I0324 20:37:31.074425 32984 out.go:129] raft2021/03/24 18:35:26 INFO: raft.node: cd7087fb01a38621 elected leader cd7087fb01a38621 at term 2
I0324 20:37:31.075196 32984 out.go:129] 2021-03-24 18:35:26.174825 I | etcdserver: published {Name:ubuntu ClientURLs:[https://192.168.1.6:2379]} to cluster 64f9292001ec2152
I0324 20:37:31.076074 32984 out.go:129] 2021-03-24 18:35:26.174875 I | embed: ready to serve client requests
I0324 20:37:31.076866 32984 out.go:129] 2021-03-24 18:35:26.174923 I | etcdserver: setting up the initial cluster version to 3.4
I0324 20:37:31.077576 32984 out.go:129] 2021-03-24 18:35:26.174967 I | embed: ready to serve client requests
I0324 20:37:31.078385 32984 out.go:129] 2021-03-24 18:35:26.175803 N | etcdserver/membership: set the initial cluster version to 3.4
I0324 20:37:31.079210 32984 out.go:129] 2021-03-24 18:35:26.176150 I | etcdserver/api: enabled capabilities for version 3.4
I0324 20:37:31.080107 32984 out.go:129] 2021-03-24 18:35:26.177466 I | embed: serving client requests on 127.0.0.1:2379
I0324 20:37:31.081072 32984 out.go:129] 2021-03-24 18:35:26.178560 I | embed: serving client requests on 192.168.1.6:2379
I0324 20:37:31.081890 32984 out.go:129] 2021-03-24 18:35:44.101058 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.082897 32984 out.go:129] 2021-03-24 18:35:47.428203 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.083632 32984 out.go:129] 2021-03-24 18:35:57.428695 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.084994 32984 out.go:129] 2021-03-24 18:36:07.428217 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.086170 32984 out.go:129] 2021-03-24 18:36:17.428662 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.087168 32984 out.go:129] 2021-03-24 18:36:27.428672 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.088162 32984 out.go:129] 2021-03-24 18:36:37.428363 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.089331 32984 out.go:129] 2021-03-24 18:36:47.428832 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.090650 32984 out.go:129] 2021-03-24 18:36:57.428431 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.091831 32984 out.go:129] 2021-03-24 18:37:07.429003 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.092802 32984 out.go:129] 2021-03-24 18:37:17.428133 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.093672 32984 out.go:129] 2021-03-24 18:37:27.428620 I | etcdserver/api/etcdhttp: /health OK (status code 200)
I0324 20:37:31.098280 32984 out.go:129]
I0324 20:37:31.099325 32984 out.go:129] ==> kernel <==
I0324 20:37:31.099391 32984 exec_runner.go:52] Run: /bin/bash -c "uptime && uname -a && grep PRETTY /etc/os-release"
I0324 20:37:31.111031 32984 out.go:129] 20:37:31 up 9 min, 1 user, load average: 3.38, 12.07, 7.66
I0324 20:37:31.112820 32984 out.go:129] Linux ubuntu 5.8.0-45-generic #51~20.04.1-Ubuntu SMP Tue Feb 23 13:46:31 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
I0324 20:37:31.113961 32984 out.go:129] PRETTY_NAME="Ubuntu 20.04.2 LTS"
I0324 20:37:31.114931 32984 out.go:129]
I0324 20:37:31.115853 32984 out.go:129] ==> kube-apiserver [ee9219711a8d] <==
I0324 20:37:31.115913 32984 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 60 ee9219711a8d"
I0324 20:37:31.181921 32984 out.go:129] I0324 18:35:27.546917 1 client.go:360] parsed scheme: "endpoint"
I0324 20:37:31.183436 32984 out.go:129] I0324 18:35:27.546986 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0324 20:37:31.184525 32984 out.go:129] I0324 18:35:29.823132 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0324 20:37:31.185525 32984 out.go:129] I0324 18:35:29.823172 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0324 20:37:31.186405 32984 out.go:129] I0324 18:35:29.823320 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0324 20:37:31.187290 32984 out.go:129] I0324 18:35:29.823821 1 secure_serving.go:197] Serving securely on [::]:8443
I0324 20:37:31.188300 32984 out.go:129] I0324 18:35:29.823844 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0324 20:37:31.189162 32984 out.go:129] I0324 18:35:29.823904 1 dynamic_serving_content.go:130] Starting aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key
I0324 20:37:31.189963 32984 out.go:129] I0324 18:35:29.823989 1 autoregister_controller.go:141] Starting autoregister controller
I0324 20:37:31.190814 32984 out.go:129] I0324 18:35:29.824002 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0324 20:37:31.191696 32984 out.go:129] I0324 18:35:29.824430 1 apf_controller.go:261] Starting API Priority and Fairness config controller
I0324 20:37:31.192605 32984 out.go:129] I0324 18:35:29.824479 1 available_controller.go:475] Starting AvailableConditionController
I0324 20:37:31.193338 32984 out.go:129] I0324 18:35:29.824490 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0324 20:37:31.195143 32984 out.go:129] I0324 18:35:29.825746 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0324 20:37:31.196655 32984 out.go:129] E0324 18:35:29.826094 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.1.6, ResourceVersion: 0, AdditionalErrorMsg:
I0324 20:37:31.197550 32984 out.go:129] I0324 18:35:29.826194 1 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0324 20:37:31.198966 32984 out.go:129] I0324 18:35:29.826617 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0324 20:37:31.199991 32984 out.go:129] I0324 18:35:29.826717 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0324 20:37:31.200854 32984 out.go:129] I0324 18:35:29.826779 1 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0324 20:37:31.201745 32984 out.go:129] I0324 18:35:29.826817 1 controller.go:83] Starting OpenAPI AggregationController
I0324 20:37:31.202638 32984 out.go:129] I0324 18:35:29.826870 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0324 20:37:31.203424 32984 out.go:129] I0324 18:35:29.826900 1 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0324 20:37:31.204232 32984 out.go:129] I0324 18:35:29.827300 1 controller.go:86] Starting OpenAPI controller
I0324 20:37:31.205192 32984 out.go:129] I0324 18:35:29.827366 1 naming_controller.go:291] Starting NamingConditionController
I0324 20:37:31.206134 32984 out.go:129] I0324 18:35:29.827388 1 establishing_controller.go:76] Starting EstablishingController
I0324 20:37:31.207044 32984 out.go:129] I0324 18:35:29.827406 1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0324 20:37:31.207815 32984 out.go:129] I0324 18:35:29.827458 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0324 20:37:31.208558 32984 out.go:129] I0324 18:35:29.827502 1 crd_finalizer.go:266] Starting CRDFinalizer
I0324 20:37:31.209287 32984 out.go:129] I0324 18:35:29.827548 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0324 20:37:31.209962 32984 out.go:129] I0324 18:35:29.827598 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0324 20:37:31.210746 32984 out.go:129] I0324 18:35:29.849380 1 shared_informer.go:247] Caches are synced for node_authorizer
I0324 20:37:31.211529 32984 out.go:129] I0324 18:35:29.866248 1 controller.go:609] quota admission added evaluator for: namespaces
I0324 20:37:31.212501 32984 out.go:129] I0324 18:35:29.924396 1 cache.go:39] Caches are synced for autoregister controller
I0324 20:37:31.213397 32984 out.go:129] I0324 18:35:29.924535 1 apf_controller.go:266] Running API Priority and Fairness config worker
I0324 20:37:31.214357 32984 out.go:129] I0324 18:35:29.924553 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0324 20:37:31.215294 32984 out.go:129] I0324 18:35:29.926918 1 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller
I0324 20:37:31.216102 32984 out.go:129] I0324 18:35:29.926939 1 shared_informer.go:247] Caches are synced for crd-autoregister
I0324 20:37:31.216970 32984 out.go:129] I0324 18:35:29.927696 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0324 20:37:31.217732 32984 out.go:129] I0324 18:35:30.823706 1 controller.go:132] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0324 20:37:31.218529 32984 out.go:129] I0324 18:35:30.823895 1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0324 20:37:31.219299 32984 out.go:129] I0324 18:35:30.831501 1 storage_scheduling.go:132] created PriorityClass system-node-critical with value 2000001000
I0324 20:37:31.220012 32984 out.go:129] I0324 18:35:30.835813 1 storage_scheduling.go:132] created PriorityClass system-cluster-critical with value 2000000000
I0324 20:37:31.220763 32984 out.go:129] I0324 18:35:30.835872 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
I0324 20:37:31.221519 32984 out.go:129] I0324 18:35:31.306952 1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0324 20:37:31.222361 32984 out.go:129] I0324 18:35:31.355045 1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
I0324 20:37:31.223344 32984 out.go:129] W0324 18:35:31.488772 1 lease.go:233] Resetting endpoints for master service "kubernetes" to [192.168.1.6]
I0324 20:37:31.224085 32984 out.go:129] I0324 18:35:31.490154 1 controller.go:609] quota admission added evaluator for: endpoints
I0324 20:37:31.224832 32984 out.go:129] I0324 18:35:31.496629 1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0324 20:37:31.225519 32984 out.go:129] I0324 18:35:32.321516 1 controller.go:609] quota admission added evaluator for: serviceaccounts
I0324 20:37:31.226419 32984 out.go:129] I0324 18:35:33.015459 1 controller.go:609] quota admission added evaluator for: deployments.apps
I0324 20:37:31.227310 32984 out.go:129] I0324 18:35:33.053595 1 controller.go:609] quota admission added evaluator for: daemonsets.apps
I0324 20:37:31.228224 32984 out.go:129] I0324 18:35:39.591286 1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
I0324 20:37:31.229173 32984 out.go:129] I0324 18:35:48.375866 1 controller.go:609] quota admission added evaluator for: replicasets.apps
I0324 20:37:31.230266 32984 out.go:129] I0324 18:35:48.377487 1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
I0324 20:37:31.231354 32984 out.go:129] I0324 18:36:05.651623 1 client.go:360] parsed scheme: "passthrough"
I0324 20:37:31.233032 32984 out.go:129] I0324 18:36:05.651697 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0324 20:37:31.234286 32984 out.go:129] I0324 18:36:05.651707 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0324 20:37:31.235360 32984 out.go:129] I0324 18:36:47.837670 1 client.go:360] parsed scheme: "passthrough"
I0324 20:37:31.236252 32984 out.go:129] I0324 18:36:47.837717 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0324 20:37:31.237793 32984 out.go:129] I0324 18:36:47.837725 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0324 20:37:31.239141 32984 out.go:129]
I0324 20:37:31.240169 32984 out.go:129] ==> kube-controller-manager [69b34f133437] <==
I0324 20:37:31.240272 32984 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 60 69b34f133437"
I0324 20:37:31.302901 32984 out.go:129] E0324 18:35:48.271403 1 core.go:232] failed to start cloud node lifecycle controller: no cloud provider provided
I0324 20:37:31.303993 32984 out.go:129] W0324 18:35:48.271415 1 controllermanager.go:546] Skipping "cloud-node-lifecycle"
I0324 20:37:31.304864 32984 out.go:129] I0324 18:35:48.271652 1 shared_informer.go:240] Waiting for caches to sync for resource quota
I0324 20:37:31.305642 32984 out.go:129] W0324 18:35:48.277543 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="ubuntu" does not exist
I0324 20:37:31.306507 32984 out.go:129] I0324 18:35:48.321484 1 shared_informer.go:247] Caches are synced for endpoint_slice_mirroring
I0324 20:37:31.307304 32984 out.go:129] I0324 18:35:48.321530 1 shared_informer.go:247] Caches are synced for PV protection
I0324 20:37:31.308051 32984 out.go:129] I0324 18:35:48.321548 1 shared_informer.go:247] Caches are synced for crt configmap
I0324 20:37:31.308845 32984 out.go:129] I0324 18:35:48.322122 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0324 20:37:31.309552 32984 out.go:129] I0324 18:35:48.323029 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0324 20:37:31.310342 32984 out.go:129] I0324 18:35:48.323065 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-serving
I0324 20:37:31.311099 32984 out.go:129] I0324 18:35:48.323467 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-legacy-unknown
I0324 20:37:31.311818 32984 out.go:129] I0324 18:35:48.323567 1 shared_informer.go:247] Caches are synced for certificate-csrsigning-kubelet-client
I0324 20:37:31.312510 32984 out.go:129] I0324 18:35:48.327572 1 shared_informer.go:247] Caches are synced for TTL
I0324 20:37:31.313201 32984 out.go:129] I0324 18:35:48.345831 1 shared_informer.go:247] Caches are synced for attach detach
I0324 20:37:31.313905 32984 out.go:129] I0324 18:35:48.345882 1 shared_informer.go:247] Caches are synced for endpoint_slice
I0324 20:37:31.314655 32984 out.go:129] I0324 18:35:48.368928 1 shared_informer.go:247] Caches are synced for node
I0324 20:37:31.315360 32984 out.go:129] I0324 18:35:48.368980 1 range_allocator.go:172] Starting range CIDR allocator
I0324 20:37:31.316107 32984 out.go:129] I0324 18:35:48.368987 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator
I0324 20:37:31.316927 32984 out.go:129] I0324 18:35:48.368991 1 shared_informer.go:247] Caches are synced for cidrallocator
I0324 20:37:31.317825 32984 out.go:129] I0324 18:35:48.370917 1 shared_informer.go:247] Caches are synced for HPA
I0324 20:37:31.318689 32984 out.go:129] I0324 18:35:48.371924 1 shared_informer.go:247] Caches are synced for service account
I0324 20:37:31.319424 32984 out.go:129] I0324 18:35:48.372011 1 shared_informer.go:247] Caches are synced for daemon sets
I0324 20:37:31.320201 32984 out.go:129] I0324 18:35:48.372021 1 shared_informer.go:247] Caches are synced for deployment
I0324 20:37:31.320971 32984 out.go:129] I0324 18:35:48.372270 1 shared_informer.go:247] Caches are synced for endpoint
I0324 20:37:31.321701 32984 out.go:129] I0324 18:35:48.372343 1 shared_informer.go:247] Caches are synced for expand
I0324 20:37:31.322508 32984 out.go:129] I0324 18:35:48.372855 1 shared_informer.go:247] Caches are synced for certificate-csrapproving
I0324 20:37:31.323421 32984 out.go:129] I0324 18:35:48.376450 1 range_allocator.go:373] Set node ubuntu PodCIDR to [10.244.0.0/24]
I0324 20:37:31.324113 32984 out.go:129] I0324 18:35:48.377820 1 shared_informer.go:247] Caches are synced for namespace
I0324 20:37:31.324820 32984 out.go:129] I0324 18:35:48.378783 1 event.go:291] "Event occurred" object="metallb-system/controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set controller-7646d7cd58 to 1"
I0324 20:37:31.325504 32984 out.go:129] I0324 18:35:48.383775 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
I0324 20:37:31.326204 32984 out.go:129] I0324 18:35:48.387512 1 shared_informer.go:247] Caches are synced for GC
I0324 20:37:31.326914 32984 out.go:129] I0324 18:35:48.389172 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-rc4s5"
I0324 20:37:31.327618 32984 out.go:129] I0324 18:35:48.398062 1 shared_informer.go:247] Caches are synced for ReplicaSet
I0324 20:37:31.328325 32984 out.go:129] I0324 18:35:48.410550 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-q8fqk"
I0324 20:37:31.329042 32984 out.go:129] I0324 18:35:48.410864 1 shared_informer.go:247] Caches are synced for taint
I0324 20:37:31.329741 32984 out.go:129] I0324 18:35:48.411003 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
I0324 20:37:31.330385 32984 out.go:129] I0324 18:35:48.411116 1 event.go:291] "Event occurred" object="ubuntu" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ubuntu event: Registered Node ubuntu in Controller"
I0324 20:37:31.331202 32984 out.go:129] I0324 18:35:48.411023 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0324 20:37:31.332047 32984 out.go:129] W0324 18:35:48.413943 1 node_lifecycle_controller.go:1044] Missing timestamp for Node ubuntu. Assuming now as a timestamp.
I0324 20:37:31.333166 32984 out.go:129] I0324 18:35:48.414357 1 node_lifecycle_controller.go:1195] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0324 20:37:31.334150 32984 out.go:129] I0324 18:35:48.421210 1 shared_informer.go:247] Caches are synced for PVC protection
I0324 20:37:31.335003 32984 out.go:129] I0324 18:35:48.421725 1 shared_informer.go:247] Caches are synced for stateful set
I0324 20:37:31.335748 32984 out.go:129] I0324 18:35:48.423889 1 shared_informer.go:247] Caches are synced for disruption
I0324 20:37:31.336506 32984 out.go:129] I0324 18:35:48.423991 1 disruption.go:339] Sending events to api server.
I0324 20:37:31.337376 32984 out.go:129] I0324 18:35:48.424044 1 shared_informer.go:247] Caches are synced for ReplicationController
I0324 20:37:31.338179 32984 out.go:129] I0324 18:35:48.425127 1 event.go:291] "Event occurred" object="metallb-system/controller-7646d7cd58" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: controller-7646d7cd58-4hscq"
I0324 20:37:31.338995 32984 out.go:129] I0324 18:35:48.427220 1 shared_informer.go:247] Caches are synced for persistent volume
I0324 20:37:31.339898 32984 out.go:129] E0324 18:35:48.430762 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"6459f5be-e096-4c0d-873d-0d9dae4d9148", ResourceVersion:"257", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63752207733, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc000a50200), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc000a50220)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc000a50240), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"},
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001533d40), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000a50260), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), 
Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc000a50280), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc000a502e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc001015200), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00058bf88), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", 
NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000bcad90), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e868)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0018a2008)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0324 20:37:31.341062 32984 out.go:129] I0324 18:35:48.494790 1 shared_informer.go:247] Caches are synced for job
I0324 20:37:31.341916 32984 out.go:129] I0324 18:35:48.514373 1 event.go:291] "Event occurred" object="metallb-system/speaker"
kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: speaker-vt68t"
I0324 20:37:31.342713 32984 out.go:129] I0324 18:35:48.565999 1 shared_informer.go:247] Caches are synced for ClusterRoleAggregator
I0324 20:37:31.343447 32984 out.go:129] I0324 18:35:48.571789 1 shared_informer.go:247] Caches are synced for resource quota
I0324 20:37:31.344152 32984 out.go:129] I0324 18:35:48.577288 1 shared_informer.go:247] Caches are synced for resource quota
I0324 20:37:31.344838 32984 out.go:129] E0324 18:35:48.582559 1 clusterroleaggregation_controller.go:181] edit failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "edit": the object has been modified; please apply your changes to the latest version and try again
I0324 20:37:31.345542 32984 out.go:129] E0324 18:35:48.582629 1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
I0324 20:37:31.346250 32984 out.go:129] I0324 18:35:48.884670 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0324 20:37:31.346961 32984 out.go:129] I0324 18:35:48.986025 1 shared_informer.go:247] Caches are synced for garbage collector
I0324 20:37:31.347660 32984 out.go:129] I0324 18:35:49.071971 1 shared_informer.go:247] Caches are synced for garbage collector
I0324 20:37:31.348482 32984 out.go:129] I0324 18:35:49.072017 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0324 20:37:31.349357 32984 out.go:129] I0324 18:35:53.415202 1 node_lifecycle_controller.go:1222] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0324 20:37:31.350261 32984 out.go:129]
I0324 20:37:31.351158 32984 out.go:129] ==> kube-proxy [954a17c3f5ef] <==
I0324 20:37:31.351223 32984 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 60 954a17c3f5ef"
I0324 20:37:31.412871 32984 out.go:129] I0324 18:35:51.310107 1 node.go:172] Successfully retrieved node IP: 192.168.1.6
I0324 20:37:31.413906 32984 out.go:129] I0324 18:35:51.310285 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.1.6), assume IPv4 operation
I0324 20:37:31.414871 32984 out.go:129] W0324 18:35:51.367796 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0324 20:37:31.415772 32984 out.go:129] I0324 18:35:51.367950 1 server_others.go:185] Using iptables Proxier.
I0324 20:37:31.416837 32984 out.go:129] I0324 18:35:51.401389 1 server.go:650] Version: v1.20.2
I0324 20:37:31.417667 32984 out.go:129] I0324 18:35:51.402024 1 conntrack.go:52] Setting nf_conntrack_max to 262144
I0324 20:37:31.418554 32984 out.go:129] I0324 18:35:51.402134 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0324 20:37:31.419241 32984 out.go:129] I0324 18:35:51.402199 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0324 20:37:31.419902 32984 out.go:129] I0324 18:35:51.403872 1 config.go:315] Starting service config controller
I0324 20:37:31.420751 32984 out.go:129] I0324 18:35:51.404362 1 config.go:224] Starting endpoint slice config controller
I0324 20:37:31.421786 32984 out.go:129] I0324 18:35:51.404394 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0324 20:37:31.422665 32984 out.go:129] I0324 18:35:51.404525 1 shared_informer.go:240] Waiting for caches to sync for service config
I0324 20:37:31.423388 32984 out.go:129] I0324 18:35:51.504829 1 shared_informer.go:247] Caches are synced for service config
I0324 20:37:31.424137 32984 out.go:129] I0324 18:35:51.504829 1 shared_informer.go:247] Caches are synced for endpoint slice config
I0324 20:37:31.424859 32984 out.go:129]
I0324 20:37:31.425639 32984 out.go:129] ==> kube-scheduler [05b06e91f987] <==
I0324 20:37:31.425695 32984 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 60 05b06e91f987"
I0324 20:37:31.486466 32984 out.go:129] I0324 18:35:25.951348 1 serving.go:331] Generated self-signed cert in-memory
I0324 20:37:31.487641 32984 out.go:129] W0324 18:35:29.847797 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
I0324 20:37:31.488730 32984 out.go:129] W0324 18:35:29.847859 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
I0324 20:37:31.489741 32984 out.go:129] W0324 18:35:29.847875 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
I0324 20:37:31.490669 32984 out.go:129] W0324 18:35:29.847883 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0324 20:37:31.492137 32984 out.go:129] I0324 18:35:29.868831 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0324 20:37:31.493036 32984 out.go:129] I0324 18:35:29.868878 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0324 20:37:31.493736 32984 out.go:129] I0324 18:35:29.869506 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0324 20:37:31.494481 32984 out.go:129] I0324 18:35:29.869573 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0324 20:37:31.495212 32984 out.go:129] E0324 18:35:29.871693 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
I0324 20:37:31.495949 32984 out.go:129] E0324 18:35:29.871970 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0324 20:37:31.496679 32984 out.go:129] E0324 18:35:29.876371 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0324 20:37:31.497364 32984 out.go:129] E0324 18:35:29.876671 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0324 20:37:31.498393 32984 out.go:129] E0324 18:35:29.876890 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0324 20:37:31.499286 32984 out.go:129] E0324 18:35:29.876703 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0324 20:37:31.500275 32984 out.go:129] E0324 18:35:29.876711 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0324 20:37:31.501134 32984 out.go:129] E0324 18:35:29.876728 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0324 20:37:31.502092 32984 out.go:129] E0324 18:35:29.876741 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0324 20:37:31.503067 32984 out.go:129] E0324 18:35:29.876817 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0324 20:37:31.503898 32984 out.go:129] E0324 18:35:29.876823 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
I0324 20:37:31.504701 32984 out.go:129] E0324 18:35:29.877349 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0324 20:37:31.505465 32984 out.go:129] E0324 18:35:30.723317 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0324 20:37:31.506209 32984 out.go:129] E0324 18:35:30.826403 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
I0324 20:37:31.507013 32984 out.go:129] E0324 18:35:30.857293 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
I0324 20:37:31.507702 32984 out.go:129] E0324 18:35:30.912168 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0324 20:37:31.508560 32984 out.go:129] E0324 18:35:30.943915 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
I0324 20:37:31.509258 32984 out.go:129] E0324 18:35:31.030154 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0324 20:37:31.510033 32984 out.go:129] E0324 18:35:31.112170 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0324 20:37:31.510737 32984 out.go:129] I0324 18:35:31.469729 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0324 20:37:31.511460 32984 out.go:129]
I0324 20:37:31.512190 32984 out.go:129] ==> kubelet <==
I0324 20:37:31.512251 32984 exec_runner.go:52] Run: /bin/bash -c "sudo journalctl -u kubelet -n 60"
I0324 20:37:31.531742 32984 out.go:129] -- Logs begin at Fri 2021-03-19 21:31:59 EET, end at Wed 2021-03-24 20:37:31 EET.
--
I0324 20:37:31.532848 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: E0324 20:35:39.626122 31450 kubelet.go:1826] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
I0324 20:37:31.534141 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: E0324 20:35:39.726411 31450 kubelet.go:1826] skipping pod synchronization - container runtime status check may not have completed yet
I0324 20:37:31.535418 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.729106 31450 kubelet_node_status.go:71] Attempting to register node ubuntu
I0324 20:37:31.536461 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.737126 31450 kubelet_node_status.go:109] Node ubuntu was previously registered
I0324 20:37:31.537274 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.737256 31450 kubelet_node_status.go:74] Successfully registered node ubuntu
I0324 20:37:31.538458 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.799671 31450 cpu_manager.go:193] [cpumanager] starting with none policy
I0324 20:37:31.539675 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.799711 31450 cpu_manager.go:194] [cpumanager] reconciling every 10s
I0324 20:37:31.540737 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.799745 31450 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0324 20:37:31.541761 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.799932 31450 state_mem.go:88] [cpumanager] updated default cpuset: ""
I0324 20:37:31.542725 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.799941 31450 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
I0324 20:37:31.543658 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.799951 31450 policy_none.go:43] [cpumanager] none policy: Start
I0324 20:37:31.544545 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: W0324 20:35:39.801137 31450 manager.go:594] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
I0324 20:37:31.545527 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.801439 31450 plugin_manager.go:114] Starting Kubelet Plugin Manager
I0324 20:37:31.547196 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.926982 31450 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0324 20:37:31.548086 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.927190 31450 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0324 20:37:31.548958 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.927258 31450 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0324 20:37:31.549858 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.927294 31450 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0324 20:37:31.550740 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973353 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/b99608f5ec34b1ffda48abcbb9412760-etc-ca-certificates") pod "kube-apiserver-ubuntu" (UID: "b99608f5ec34b1ffda48abcbb9412760")
I0324 20:37:31.551517 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973413 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/932a378e61cc6a1200c450416ec5a096-flexvolume-dir") pod "kube-controller-manager-ubuntu" (UID: "932a378e61cc6a1200c450416ec5a096")
I0324 20:37:31.552313 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973527 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/932a378e61cc6a1200c450416ec5a096-k8s-certs") pod "kube-controller-manager-ubuntu" (UID: "932a378e61cc6a1200c450416ec5a096")
I0324 20:37:31.553226 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973548 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/932a378e61cc6a1200c450416ec5a096-kubeconfig") pod "kube-controller-manager-ubuntu" (UID: "932a378e61cc6a1200c450416ec5a096")
I0324 20:37:31.554094 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973576 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/932a378e61cc6a1200c450416ec5a096-usr-share-ca-certificates") pod "kube-controller-manager-ubuntu" (UID: "932a378e61cc6a1200c450416ec5a096")
I0324 20:37:31.555099 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973605 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/6b4a0ee8b3d15a1c2e47c15d32e6eb0d-kubeconfig") pod "kube-scheduler-ubuntu" (UID: "6b4a0ee8b3d15a1c2e47c15d32e6eb0d")
I0324 20:37:31.556116 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973620 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/4cc7de39985d1c5e97e3c5e69d9af462-etcd-certs") pod "etcd-ubuntu" (UID: "4cc7de39985d1c5e97e3c5e69d9af462")
I0324 20:37:31.557629 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973634 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/4cc7de39985d1c5e97e3c5e69d9af462-etcd-data") pod "etcd-ubuntu" (UID: "4cc7de39985d1c5e97e3c5e69d9af462")
I0324 20:37:31.558779 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973652 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/932a378e61cc6a1200c450416ec5a096-etc-ca-certificates") pod "kube-controller-manager-ubuntu" (UID: "932a378e61cc6a1200c450416ec5a096")
I0324 20:37:31.559939 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973665 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/932a378e61cc6a1200c450416ec5a096-etc-pki") pod "kube-controller-manager-ubuntu" (UID: "932a378e61cc6a1200c450416ec5a096")
I0324 20:37:31.560707 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973681 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/932a378e61cc6a1200c450416ec5a096-usr-local-share-ca-certificates") pod "kube-controller-manager-ubuntu" (UID: "932a378e61cc6a1200c450416ec5a096")
I0324 20:37:31.561494 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973694 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/b99608f5ec34b1ffda48abcbb9412760-ca-certs") pod "kube-apiserver-ubuntu" (UID: "b99608f5ec34b1ffda48abcbb9412760")
I0324 20:37:31.562166 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973709 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-pki" (UniqueName: "kubernetes.io/host-path/b99608f5ec34b1ffda48abcbb9412760-etc-pki") pod "kube-apiserver-ubuntu" (UID: "b99608f5ec34b1ffda48abcbb9412760")
I0324 20:37:31.563009 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973722 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/b99608f5ec34b1ffda48abcbb9412760-k8s-certs") pod "kube-apiserver-ubuntu" (UID: "b99608f5ec34b1ffda48abcbb9412760")
I0324 20:37:31.563782 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973737 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b99608f5ec34b1ffda48abcbb9412760-usr-local-share-ca-certificates") pod "kube-apiserver-ubuntu" (UID: "b99608f5ec34b1ffda48abcbb9412760")
I0324 20:37:31.564649 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973751 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/b99608f5ec34b1ffda48abcbb9412760-usr-share-ca-certificates") pod "kube-apiserver-ubuntu" (UID: "b99608f5ec34b1ffda48abcbb9412760")
I0324 20:37:31.565590 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973767 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/932a378e61cc6a1200c450416ec5a096-ca-certs") pod "kube-controller-manager-ubuntu" (UID: "932a378e61cc6a1200c450416ec5a096")
I0324 20:37:31.566651 32984 out.go:129] Mar 24 20:35:39 ubuntu kubelet[31450]: I0324 20:35:39.973774 31450 reconciler.go:157] Reconciler: start to sync state
I0324 20:37:31.567535 32984 out.go:129] Mar 24 20:35:48 ubuntu kubelet[31450]: I0324 20:35:48.397219 31450 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0324 20:37:31.568269 32984 out.go:129] Mar 24 20:35:48 ubuntu kubelet[31450]: I0324 20:35:48.422048 31450 kuberuntime_manager.go:1006] updating runtime config through cri with podcidr 10.244.0.0/24
I0324 20:37:31.568996 32984 out.go:129] Mar 24 20:35:48 ubuntu kubelet[31450]: I0324 20:35:48.422349 31450 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
I0324 20:37:31.569804 32984 out.go:129] Mar 24 20:35:48 ubuntu kubelet[31450]: I0324 20:35:48.422643 31450 kubelet_network.go:77] Setting Pod CIDR:  -> 10.244.0.0/24
I0324 20:37:31.570701 32984 out.go:129] Mar 24 20:35:48 ubuntu kubelet[31450]: I0324 20:35:48.519241 31450 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0324 20:37:31.571826 32984 out.go:129] Mar 24 20:35:48 ubuntu kubelet[31450]: I0324 20:35:48.521957 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/322257a6-4656-4416-9907-600cff97668b-lib-modules") pod "kube-proxy-rc4s5" (UID: "322257a6-4656-4416-9907-600cff97668b")
I0324 20:37:31.572673 32984 out.go:129] Mar 24 20:35:48 ubuntu kubelet[31450]: I0324 20:35:48.523485 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/322257a6-4656-4416-9907-600cff97668b-kube-proxy") pod "kube-proxy-rc4s5" (UID: "322257a6-4656-4416-9907-600cff97668b")
I0324 20:37:31.573453 32984 out.go:129] Mar 24 20:35:48 ubuntu kubelet[31450]: I0324 20:35:48.523571 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/322257a6-4656-4416-9907-600cff97668b-xtables-lock") pod "kube-proxy-rc4s5" (UID: "322257a6-4656-4416-9907-600cff97668b")
I0324 20:37:31.574379 32984 out.go:129] Mar 24 20:35:48 ubuntu kubelet[31450]: I0324 20:35:48.523664 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-nhwpf" (UniqueName: "kubernetes.io/secret/322257a6-4656-4416-9907-600cff97668b-kube-proxy-token-nhwpf") pod "kube-proxy-rc4s5" (UID: "322257a6-4656-4416-9907-600cff97668b")
I0324 20:37:31.575153 32984 out.go:129] Mar 24 20:35:48 ubuntu kubelet[31450]: I0324 20:35:48.624452 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "speaker-token-jqsf6" (UniqueName: "kubernetes.io/secret/44ce0d6e-6d27-493b-9089-26867b2efca3-speaker-token-jqsf6") pod "speaker-vt68t" (UID: "44ce0d6e-6d27-493b-9089-26867b2efca3")
I0324 20:37:31.576019 32984 out.go:129] Mar 24 20:35:51 ubuntu kubelet[31450]: I0324 20:35:51.005890 31450 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0324 20:37:31.576865 32984 out.go:129] Mar 24 20:35:51 ubuntu kubelet[31450]: I0324 20:35:51.007620 31450 topology_manager.go:187] [topologymanager] Topology Admit Handler
I0324 20:37:31.577707 32984 out.go:129] Mar 24 20:35:51 ubuntu kubelet[31450]: W0324 20:35:51.010873 31450 pod_container_deletor.go:79] Container "07f7ff2febacb24fb9b57b9515791360cfd838e4d5ae3461ec3dad876f4ce334" not found in pod's containers
I0324 20:37:31.578515 32984 out.go:129] Mar 24 20:35:51 ubuntu kubelet[31450]: W0324 20:35:51.017463 31450 pod_container_deletor.go:79] Container "95b89051def9f432bfc576cc09922a3c514d5c3326b5b1ba0afd8dc041227e7c" not found in pod's containers
I0324 20:37:31.579514 32984 out.go:129] Mar 24 20:35:51 ubuntu kubelet[31450]: I0324 20:35:51.138460 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-68xtb" (UniqueName: "kubernetes.io/secret/294883fd-93ae-4876-9334-264f2141d974-coredns-token-68xtb") pod "coredns-74ff55c5b-q8fqk" (UID: "294883fd-93ae-4876-9334-264f2141d974")
I0324 20:37:31.580560 32984 out.go:129] Mar 24 20:35:51 ubuntu kubelet[31450]: I0324 20:35:51.138510 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "controller-token-jj245" (UniqueName: "kubernetes.io/secret/17404154-4924-4abd-baf3-6e74be215f11-controller-token-jj245") pod "controller-7646d7cd58-4hscq" (UID:
"17404154-4924-4abd-baf3-6e74be215f11") I0324 20:37:31.581497 32984 out.go:129] Mar 24 20:35:51 ubuntu kubelet[31450]: I0324 20:35:51.138530 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/294883fd-93ae-4876-9334-264f2141d974-config-volume") pod "coredns-74ff55c5b-q8fqk" (UID: "294883fd-93ae-4876-9334-264f2141d974") I0324 20:37:31.582399 32984 out.go:129] Mar 24 20:35:51 ubuntu kubelet[31450]: W0324 20:35:51.788498 31450 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-q8fqk through plugin: invalid network status for I0324 20:37:31.583616 32984 out.go:129] Mar 24 20:35:52 ubuntu kubelet[31450]: W0324 20:35:52.172782 31450 pod_container_deletor.go:79] Container "d5b71ec9bb20dc8953f9a0c4dec29657dda59af2b6c693a1e0c4c42e4a496a36" not found in pod's containers I0324 20:37:31.584461 32984 out.go:129] Mar 24 20:35:52 ubuntu kubelet[31450]: W0324 20:35:52.174014 31450 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for metallb-system/controller-7646d7cd58-4hscq through plugin: invalid network status for I0324 20:37:31.585276 32984 out.go:129] Mar 24 20:35:52 ubuntu kubelet[31450]: W0324 20:35:52.175876 31450 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-74ff55c5b-q8fqk through plugin: invalid network status for I0324 20:37:31.586123 32984 out.go:129] Mar 24 20:35:53 ubuntu kubelet[31450]: I0324 20:35:53.003526 31450 topology_manager.go:187] [topologymanager] Topology Admit Handler I0324 20:37:31.586970 32984 out.go:129] Mar 24 20:35:53 ubuntu kubelet[31450]: I0324 20:35:53.149709 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/b1a31819-050a-4a35-a815-91b404320d39-tmp") pod "storage-provisioner" 
(UID: "b1a31819-050a-4a35-a815-91b404320d39") I0324 20:37:31.588016 32984 out.go:129] Mar 24 20:35:53 ubuntu kubelet[31450]: I0324 20:35:53.149811 31450 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-jwgsk" (UniqueName: "kubernetes.io/secret/b1a31819-050a-4a35-a815-91b404320d39-storage-provisioner-token-jwgsk") pod "storage-provisioner" (UID: "b1a31819-050a-4a35-a815-91b404320d39") I0324 20:37:31.588940 32984 out.go:129] Mar 24 20:35:53 ubuntu kubelet[31450]: W0324 20:35:53.207971 31450 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for metallb-system/controller-7646d7cd58-4hscq through plugin: invalid network status for I0324 20:37:31.589831 32984 out.go:129] I0324 20:37:31.590853 32984 out.go:129] ==> storage-provisioner [696e909b7688] <== I0324 20:37:31.590924 32984 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 60 696e909b7688" I0324 20:37:31.654335 32984 out.go:129] I0324 18:35:53.742430 1 storage_provisioner.go:115] Initializing the minikube storage provisioner... I0324 20:37:31.656842 32984 out.go:129] I0324 18:35:53.764149 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service! I0324 20:37:31.658507 32984 out.go:129] I0324 18:35:53.764273 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... 
I0324 20:37:31.659415 32984 out.go:129] I0324 18:35:53.778989 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0324 20:37:31.660278 32984 out.go:129] I0324 18:35:53.779155 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2814707b-0c8b-4f0e-a78d-56db5fb7822a", APIVersion:"v1", ResourceVersion:"499", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ubuntu_208ca97d-a769-4142-a4de-f80392f379d3 became leader
I0324 20:37:31.661024 32984 out.go:129] I0324 18:35:53.779193 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_ubuntu_208ca97d-a769-4142-a4de-f80392f379d3!
I0324 20:37:31.661792 32984 out.go:129] I0324 18:35:53.879721 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_ubuntu_208ca97d-a769-4142-a4de-f80392f379d3!
I0324 20:37:31.662473 32984 out.go:129]
I0324 20:37:31.663200 32984 out.go:129] ==> Audit <==
I0324 20:37:31.680691 32984 out.go:129] |---------|--------------------------------------|----------|------|---------|-------------------------------|-------------------------------|
| Command | Args | Profile | User | Version | Start Time | End Time |
|---------|--------------------------------------|----------|------|---------|-------------------------------|-------------------------------|
| start | --embed-certs --driver=none | minikube | root | v1.18.1 | Fri, 19 Mar 2021 13:14:10 PDT | Fri, 19 Mar 2021 13:21:34 PDT |
| addons | configure metallb | minikube | root | v1.18.1 | Fri, 19 Mar 2021 13:21:48 PDT | Fri, 19 Mar 2021 13:21:59 PDT |
| addons | enable metallb | minikube | root | v1.18.1 | Fri, 19 Mar 2021 13:22:13 PDT | Fri, 19 Mar 2021 13:22:14 PDT |
| start | | minikube | root | v1.18.1 | Sun, 21 Mar 2021 02:52:16 EET | Sun, 21 Mar 2021 02:53:15 EET |
| start | | minikube | root | v1.18.1 | Sun, 21 Mar 2021 23:49:16 EET | Sun, 21 Mar 2021 23:50:44 EET |
| addons | list | minikube | root | v1.18.1 | Mon, 22 Mar 2021 00:26:29 EET | Mon, 22 Mar 2021 00:26:29 EET |
| addons | enable metrics-server | minikube | root | v1.18.1 | Mon, 22 Mar 2021 00:28:15 EET | Mon, 22 Mar 2021 00:28:16 EET |
| addons | enable dashboard | minikube | root | v1.18.1 | Mon, 22 Mar 2021 00:28:51 EET | Mon, 22 Mar 2021 00:28:51 EET |
| addons | list | minikube | root | v1.18.1 | Mon, 22 Mar 2021 00:38:10 EET | Mon, 22 Mar 2021 00:38:10 EET |
| addons | disable metrics-server | minikube | root | v1.18.1 | Mon, 22 Mar 2021 00:41:13 EET | Mon, 22 Mar 2021 00:41:13 EET |
| addons | list | minikube | root | v1.18.1 | Mon, 22 Mar 2021 00:41:37 EET | Mon, 22 Mar 2021 00:41:37 EET |
| addons | list | minikube | root | v1.18.1 | Mon, 22 Mar 2021 00:47:05 EET | Mon, 22 Mar 2021 00:47:05 EET |
| start | --embed-certs --driver=none | minikube | root | v1.18.1 | Mon, 22 Mar 2021 00:47:54 EET | Mon, 22 Mar 2021 00:49:22 EET |
| | --insecure-registry=192.168.1.6:2376 | | | | | |
| | --extra-config | | | | | |
| | kubelet.node-ip=192.168.1.6 | | | | | |
| | --apiserver-ips=192.168.1.6 | | | | | |
| | --apiserver-port=2376 | | | | | |
| addons | disable dashboard | minikube | root | v1.18.1 | Mon, 22 Mar 2021 00:49:32 EET | Mon, 22 Mar 2021 00:50:37 EET |
| delete | | minikube | root | v1.18.1 | Mon, 22 Mar 2021 08:10:12 EET | Mon, 22 Mar 2021 08:10:32 EET |
| start | | minikube | root | v1.18.1 | Mon, 22 Mar 2021 08:12:53 EET | Mon, 22 Mar 2021 08:15:03 EET |
| stop | | minikube | root | v1.18.1 | Mon, 22 Mar 2021 09:13:32 EET | Mon, 22 Mar 2021 09:13:32 EET |
| start | --embed-certs --driver=none | minikube | root | v1.18.1 | Mon, 22 Mar 2021 09:13:40 EET | Mon, 22 Mar 2021 09:18:17 EET |
| | --extra-config=kubelet.max-pods=1000 | | | | | |
| logs | | minikube | root | v1.18.1 | Mon, 22 Mar 2021 09:24:25 EET | Mon, 22 Mar 2021 09:24:29 EET |
| addons | list | minikube | root | v1.18.1 | Mon, 22 Mar 2021 09:25:02 EET | Mon, 22 Mar 2021 09:25:02 EET |
| addons | configure metallb | minikube | root | v1.18.1 | Mon, 22 Mar 2021 09:25:18 EET | Mon, 22 Mar 2021 09:25:27 EET |
| addons | enable metallb | minikube | root | v1.18.1 | Mon, 22 Mar 2021 09:25:51 EET | Mon, 22 Mar 2021 09:25:52 EET |
| stop | | minikube | root | v1.18.1 | Mon, 22 Mar 2021 09:39:56 EET | Mon, 22 Mar 2021 09:40:03 EET |
| start | | minikube | root | v1.18.1 | Mon, 22 Mar 2021 09:43:22 EET | Mon, 22 Mar 2021 09:44:44 EET |
| stop | | minikube | root | v1.18.1 | Mon, 22 Mar 2021 15:40:45 EET | Mon, 22 Mar 2021 15:41:29 EET |
| start | | minikube | root | v1.18.1 | Tue, 23 Mar 2021 06:53:18 EET | Tue, 23 Mar 2021 06:54:48 EET |
| stop | | minikube | root | v1.18.1 | Tue, 23 Mar 2021 17:50:15 EET | Tue, 23 Mar 2021 17:50:22 EET |
| start | | minikube | root | v1.18.1 | Wed, 24 Mar 2021 07:48:28 EET | Wed, 24 Mar 2021 07:50:15 EET |
| stop | | minikube | root | v1.18.1 | Wed, 24 Mar 2021 17:22:44 EET | Wed, 24 Mar 2021 17:22:56 EET |
| stop | | minikube | root | v1.18.1 | Wed, 24 Mar 2021 20:29:28 EET | Wed, 24 Mar 2021 20:29:33 EET |
| start | | minikube | root | v1.18.1 | Wed, 24 Mar 2021 20:29:55 EET | Wed, 24 Mar 2021 20:34:34 EET |
| start | | minikube | root | v1.18.1 | Wed, 24 Mar 2021 20:27:50 EET | Wed, 24 Mar 2021 20:35:40 EET |
|---------|--------------------------------------|----------|------|---------|-------------------------------|-------------------------------|
I0324 20:37:31.682610 32984 out.go:129]
I0324 20:37:31.683629 32984 out.go:129] ==> Last Start <==
I0324 20:37:31.684796 32984 out.go:129] Log file created at: 2021/03/24 20:29:55
I0324 20:37:31.686034 32984 out.go:129] Running on machine: ubuntu
I0324 20:37:31.687094 32984 out.go:129] Binary: Built with gc go1.16 for linux/amd64
I0324 20:37:31.688186 32984 out.go:129] Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0324 20:37:31.689055 32984 out.go:129] I0324 20:29:55.331485 8815 out.go:239] Setting OutFile to fd 1 ...
I0324 20:37:31.689979 32984 out.go:129] I0324 20:29:55.331618 8815 out.go:291] isatty.IsTerminal(1) = true
I0324 20:37:31.691451 32984 out.go:129] I0324 20:29:55.331623 8815 out.go:252] Setting ErrFile to fd 2...
I0324 20:37:31.692717 32984 out.go:129] I0324 20:29:55.331627 8815 out.go:291] isatty.IsTerminal(2) = true
I0324 20:37:31.693948 32984 out.go:129] I0324 20:29:55.331750 8815 root.go:308] Updating PATH: /root/.minikube/bin
I0324 20:37:31.694987 32984 out.go:129] I0324 20:29:55.331971 8815 out.go:246] Setting JSON to false
I0324 20:37:31.696281 32984 out.go:129] I0324 20:29:55.333552 8815 start.go:108] hostinfo: {"hostname":"ubuntu","uptime":138,"bootTime":1616610457,"procs":433,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.8.0-45-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"7c5f4d56-adf4-6fee-22cf-72da73ac621b"}
I0324 20:37:31.697234 32984 out.go:129] I0324 20:29:55.333634 8815 start.go:118] virtualization: kvm host
I0324 20:37:31.698028 32984 out.go:129] I0324 20:29:55.335156 8815 out.go:129] 😄 minikube v1.18.1 on Ubuntu 20.04
I0324 20:37:31.698808 32984 out.go:129] I0324 20:29:55.335405 8815 notify.go:126] Checking for updates...
I0324 20:37:31.699630 32984 out.go:129] I0324 20:29:55.335872 8815 driver.go:323] Setting default libvirt URI to qemu:///system
I0324 20:37:31.700331 32984 out.go:129] I0324 20:29:55.336345 8815 exec_runner.go:52] Run: systemctl --version
I0324 20:37:31.701054 32984 out.go:129] I0324 20:29:55.339525 8815 out.go:129] ✨ Using the none driver based on existing profile
I0324 20:37:31.701911 32984 out.go:129] I0324 20:29:55.339593 8815 start.go:276] selected driver: none
I0324 20:37:31.702912 32984 out.go:129] I0324 20:29:55.339597 8815 start.go:718] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4900 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP:192.168.1.100 LoadBalancerEndIP:192.168.1.254 CustomIngressCert: ExtraOptions:[{Component:kubelet Key:max-pods Value:1000} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:192.168.1.6 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:true metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false}
I0324 20:37:31.703940 32984 out.go:129] I0324 20:29:55.339719 8815 start.go:729] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I0324 20:37:31.704924 32984 out.go:129] I0324 20:29:55.339739 8815 start.go:1281] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
I0324 20:37:31.705814 32984 out.go:129] I0324 20:29:55.340889 8815 start_flags.go:395] config:
I0324 20:37:31.706775 32984 out.go:129] {Name:minikube KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4900 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP:192.168.1.100 LoadBalancerEndIP:192.168.1.254 CustomIngressCert: ExtraOptions:[{Component:kubelet Key:max-pods Value:1000} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:192.168.1.6 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:true metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false}
I0324 20:37:31.707847 32984 out.go:129] I0324 20:29:55.341932 8815 out.go:129] 👍 Starting control plane node minikube in cluster minikube
I0324 20:37:31.708680 32984 out.go:129] I0324 20:29:55.342115 8815 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0324 20:37:31.709560 32984 out.go:129] I0324 20:29:55.342447 8815 cache.go:185] Successfully downloaded all kic artifacts
I0324 20:37:31.710395 32984 out.go:129] I0324 20:29:55.342492 8815 start.go:313] acquiring machines lock for minikube: {Name:mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89 Clock:{} Delay:500ms Timeout:13m0s Cancel:}
I0324 20:37:31.711254 32984 out.go:129] I0324 20:29:55.342753 8815 start.go:317] acquired machines lock for "minikube" in 232.687µs
I0324 20:37:31.712111 32984 out.go:129] I0324 20:29:55.342768 8815 start.go:93] Skipping create...Using existing machine configuration
I0324 20:37:31.712925 32984 out.go:129] I0324 20:29:55.342772 8815 fix.go:55] fixHost starting: m01
I0324 20:37:31.713948 32984 out.go:129] W0324 20:29:55.343638 8815 none.go:130] unable to get port: "minikube" does not appear in /root/.kube/config
I0324 20:37:31.715731 32984 out.go:129] I0324 20:29:55.343646 8815 api_server.go:146] Checking apiserver status ...
I0324 20:37:31.716609 32984 out.go:129] I0324 20:29:55.343673 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:31.717504 32984 out.go:129] W0324 20:29:55.444238 8815 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: exit status 1
I0324 20:37:31.718407 32984 out.go:129] stdout:
I0324 20:37:31.719498 32984 out.go:129]
I0324 20:37:31.720675 32984 out.go:129] stderr:
I0324 20:37:31.721728 32984 out.go:129] I0324 20:29:55.444295 8815 exec_runner.go:52] Run: sudo systemctl is-active --quiet service kubelet
I0324 20:37:31.722764 32984 out.go:129] I0324 20:29:55.455586 8815 fix.go:108] recreateIfNeeded on minikube: state=Stopped err=
I0324 20:37:31.723704 32984 out.go:129] W0324 20:29:55.455599 8815 fix.go:134] unexpected machine state, will restart:
I0324 20:37:31.724644 32984 out.go:129] I0324 20:29:55.456949 8815 out.go:129] 🔄 Restarting existing none bare metal machine for "minikube" ...
I0324 20:37:31.725445 32984 out.go:129] I0324 20:29:55.459834 8815 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0324 20:37:31.726158 32984 out.go:129] I0324 20:29:55.460205 8815 start.go:267] post-start starting for "minikube" (driver="none")
I0324 20:37:31.726915 32984 out.go:129] I0324 20:29:55.460218 8815 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0324 20:37:31.727642 32984 out.go:129] I0324 20:29:55.460311 8815 exec_runner.go:52] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0324 20:37:31.728634 32984 out.go:129] I0324 20:29:55.468566 8815 main.go:121] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0324 20:37:31.729369 32984 out.go:129] I0324 20:29:55.468585 8815 main.go:121] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0324 20:37:31.730100 32984 out.go:129] I0324 20:29:55.468594 8815 main.go:121] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0324 20:37:31.730887 32984 out.go:129] I0324 20:29:55.470021 8815 out.go:129] ℹ️ OS release is Ubuntu 20.04.2 LTS
I0324 20:37:31.731600 32984 out.go:129] I0324 20:29:55.470087 8815 filesync.go:118] Scanning /root/.minikube/addons for local assets ...
I0324 20:37:31.732268 32984 out.go:129] I0324 20:29:55.470148 8815 filesync.go:118] Scanning /root/.minikube/files for local assets ...
I0324 20:37:31.732978 32984 out.go:129] I0324 20:29:55.470173 8815 start.go:270] post-start completed in 9.957982ms
I0324 20:37:31.733663 32984 out.go:129] I0324 20:29:55.470180 8815 fix.go:95] none is local, skipping auth/time setup (requires ssh)
I0324 20:37:31.734498 32984 out.go:129] I0324 20:29:55.470183 8815 fix.go:57] fixHost completed within 127.411558ms
I0324 20:37:31.735196 32984 out.go:129] I0324 20:29:55.470186 8815 start.go:80] releasing machines lock for "minikube", held for 127.42592ms
I0324 20:37:31.736000 32984 out.go:129] I0324 20:29:55.470877 8815 exec_runner.go:52] Run: sudo systemctl is-active --quiet service containerd
I0324 20:37:31.736766 32984 out.go:129] I0324 20:29:55.471147 8815 exec_runner.go:52] Run: curl -sS -m 2 https://k8s.gcr.io/
I0324 20:37:31.737507 32984 out.go:129] I0324 20:29:55.482585 8815 exec_runner.go:52] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
I0324 20:37:31.738294 32984 out.go:129] image-endpoint: unix:///var/run/dockershim.sock
I0324 20:37:31.739209 32984 out.go:129] " | sudo tee /etc/crictl.yaml"
I0324 20:37:31.740068 32984 out.go:129] I0324 20:29:55.502362 8815 exec_runner.go:52] Run: sudo systemctl cat docker.service
I0324 20:37:31.740777 32984 out.go:129] I0324 20:29:55.515579 8815 exec_runner.go:52] Run: sudo systemctl daemon-reload
I0324 20:37:31.741493 32984 out.go:129] I0324 20:29:55.912220 8815 exec_runner.go:52] Run: sudo systemctl restart docker
I0324 20:37:31.742386 32984 out.go:129] I0324 20:30:05.321837 913 exec_runner.go:52] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0324 20:37:31.743086 32984 out.go:129] I0324 20:30:18.338084 8815 exec_runner.go:85] Completed: sudo systemctl restart docker: (22.425838595s)
I0324 20:37:31.743809 32984 out.go:129] I0324 20:30:18.338143 8815 exec_runner.go:52] Run: docker version --format
I0324 20:37:31.744478 32984 out.go:129] I0324 20:30:18.406716 8815 out.go:150] 🐳 Preparing Kubernetes v1.20.2 on Docker 20.10.5 ...
I0324 20:37:31.745302 32984 out.go:129] I0324 20:30:18.406875 8815 exec_runner.go:52] Run: grep 127.0.0.1 host.minikube.internal$ /etc/hosts
I0324 20:37:31.746031 32984 out.go:129] I0324 20:30:18.438508 913 exec_runner.go:85] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (13.116644767s)
I0324 20:37:31.746776 32984 out.go:129] W0324 20:30:18.438531 913 kubeadm.go:744] addon install failed, wil retry: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": exit status 1
I0324 20:37:31.747618 32984 out.go:129] stdout:
I0324 20:37:31.748612 32984 out.go:129]
I0324 20:37:31.749679 32984 out.go:129] stderr:
I0324 20:37:31.750734 32984 out.go:129] error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns": dial tcp 192.168.1.6:8443: connect: connection refused
I0324 20:37:31.752180 32984 out.go:129] To see the stack trace of this error execute with --v=5 or higher
I0324 20:37:31.753320 32984 out.go:129] I0324 20:30:18.438551 913 kubeadm.go:598] restartCluster took 1m8.688400933s
I0324 20:37:31.754370 32984 out.go:129] W0324 20:30:18.438748 913 out.go:191] ! Unable to restart cluster, will reset it: addons: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": exit status 1
I0324 20:37:31.755185 32984 out.go:129] stdout:
I0324 20:37:31.756365 32984 out.go:129]
I0324 20:37:31.757446 32984 out.go:129] stderr:
I0324 20:37:31.758368 32984 out.go:129] error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns": dial tcp 192.168.1.6:8443: connect: connection refused
I0324 20:37:31.759157 32984 out.go:129] To see the stack trace of this error execute with --v=5 or higher
I0324 20:37:31.759991 32984 out.go:129]
I0324 20:37:31.760858 32984 out.go:129] I0324 20:30:18.438816 913 exec_runner.go:52] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0324 20:37:31.761624 32984 out.go:129] I0324 20:30:18.409968 8815 out.go:129] ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
I0324 20:37:31.762353 32984 out.go:129] I0324 20:30:18.410074 8815 preload.go:97] Checking if preload exists for k8s version v1.20.2 and runtime docker
I0324 20:37:31.763137 32984 out.go:129] I0324 20:30:18.627564 8815 preload.go:122] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4
I0324 20:37:31.763946 32984 out.go:129] I0324 20:30:18.627627 8815 exec_runner.go:52] Run: docker images --format :
I0324 20:37:31.764733 32984 out.go:129] I0324 20:30:18.723861 8815 docker.go:423] Got preloaded images: -- stdout --
I0324 20:37:31.765461 32984 out.go:129] parent-gateway-api:latest
I0324 20:37:31.766342 32984 out.go:129] :
I0324 20:37:31.767070 32984 out.go:129] parent-gateway-sub:latest
I0324 20:37:31.767809 32984 out.go:129] auth-server:latest
I0324 20:37:31.768507 32984 out.go:129] ocw-team-api:latest
I0324 20:37:31.769211 32984 out.go:129] team-api:latest
I0324 20:37:31.769911 32984 out.go:129] ed-timelog-api:latest
I0324 20:37:31.770630 32984 out.go:129] oneteam-api:latest
I0324 20:37:31.771352 32984 out.go:129] med-plan-api:latest
I0324 20:37:31.772066 32984 out.go:129] vac-api:latest
I0324 20:37:31.772836 32984 out.go:129] doc-mgr-api:latest
I0324 20:37:31.773737 32984 out.go:129] coms-pref-api:latest
I0324 20:37:31.774772 32984 out.go:129] reporting-api:latest
I0324 20:37:31.775604 32984 out.go:129] domain-sub-workflow:latest
I0324 20:37:31.776482 32984 out.go:129] advertisement-api:latest
I0324 20:37:31.777675 32984 out.go:129] advert-api:latest
I0324 20:37:31.779107 32984 out.go:129] interview-api:latest
I0324 20:37:31.780361 32984 out.go:129] candidate-api:latest
I0324 20:37:31.781478 32984 out.go:129] booking-api:latest
I0324 20:37:31.782693 32984 out.go:129] message-type-api:latest
I0324 20:37:31.783812 32984 out.go:129] template-attach-api:latest
I0324 20:37:31.785138 32984 out.go:129] template-api:latest
I0324 20:37:31.786914 32984 out.go:129] layout-api:latest
I0324 20:37:31.791913 32984 out.go:129] survey-api:latest
I0324 20:37:31.794753 32984 out.go:129] send-comms-api:latest
I0324 20:37:31.795783 32984 out.go:129] comms-mgr:latest
I0324 20:37:31.797060 32984 out.go:129] bulk-stat-svc:latest
I0324 20:37:31.798194 32984 out.go:129] yammer-api:latest
I0324 20:37:31.799158 32984 out.go:129] staffing-pool-api:latest
I0324 20:37:31.800148 32984 out.go:129] service-api:latest
I0324 20:37:31.801062 32984 out.go:129] roster-api:latest
I0324 20:37:31.801870 32984 out.go:129] reward-api:latest
I0324 20:37:31.802925 32984 out.go:129] pricing-api:latest
I0324 20:37:31.803720 32984 out.go:129] notes-api:latest
I0324 20:37:31.804501 32984 out.go:129] hr-api:latest
I0324 20:37:31.805450 32984 out.go:129] emp-api:latest
I0324 20:37:31.806489 32984 out.go:129] document-api:latest
I0324 20:37:31.807952 32984 out.go:129] dashboard-api:latest
I0324 20:37:31.809357 32984 out.go:129] red-coms-pref:latest
I0324 20:37:31.810928 32984 out.go:129] public-web-api:latest
I0324 20:37:31.814174 32984 out.go:129] photo-api:latest
I0324 20:37:31.815200 32984 out.go:129] parent-api:latest
I0324 20:37:31.816129 32984 out.go:129] one-feedback-api:latest
I0324 20:37:31.817016 32984 out.go:129] oc-event-api:latest
I0324 20:37:31.817864 32984 out.go:129] workflow-task-api:latest
I0324 20:37:31.819594 32984 out.go:129] resources-api:latest
I0324 20:37:31.820606 32984 out.go:129] resources-svc:latest
I0324 20:37:31.821711 32984 out.go:129] pp-api:latest
I0324 20:37:31.822985 32984 out.go:129] marketing-report-svc:latest
I0324 20:37:31.824001 32984 out.go:129] careers-api:latest
I0324 20:37:31.825092 32984 out.go:129] hr3-pwd-sync-svc:latest
I0324 20:37:31.825973 32984 out.go:129] gen-coms-render-svc:latest
I0324 20:37:31.826881 32984 out.go:129] gen-coms-metadata-svc:latest
I0324 20:37:31.827740 32984 out.go:129] gen-coms-api:latest
I0324 20:37:31.828677 32984 out.go:129] field-api:latest
I0324 20:37:31.829456 32984 out.go:129] onechild-domain-event-sub:latest
I0324 20:37:31.830259 32984 out.go:129] comms-domain-event-sub:latest
I0324 20:37:31.831287 32984 out.go:129] domain-event-pub:latest
I0324 20:37:31.832227 32984 out.go:129] child-api:latest
I0324 20:37:31.833175 32984 out.go:129] billing-api:latest
I0324 20:37:31.834098 32984 out.go:129] address-api:latest
I0324 20:37:31.835087 32984 out.go:129] auth-mgm:latest
I0324 20:37:31.836027 32984 out.go:129] auth-api:latest
I0324 20:37:31.837005 32984 out.go:129] rabbitmq:3-management
I0324 20:37:31.838175 32984 out.go:129] mcr.microsoft.com/dotnet/sdk:5.0
I0324 20:37:31.839403 32984 out.go:129] mcr.microsoft.com/dotnet/core/sdk:3.1
I0324 20:37:31.840395 32984 out.go:129] rabbitmq:3.8.13-management
I0324 20:37:31.841488 32984 out.go:129] rabbitmq:3.8.9-management
I0324 20:37:31.842472 32984 out.go:129] k8s.gcr.io/kube-proxy:v1.20.2
I0324 20:37:31.843467 32984 out.go:129] k8s.gcr.io/kube-controller-manager:v1.20.2 I0324 20:37:31.844287 32984 out.go:129] k8s.gcr.io/kube-apiserver:v1.20.2 I0324 20:37:31.845243 32984 out.go:129] k8s.gcr.io/kube-scheduler:v1.20.2 I0324 20:37:31.846619 32984 out.go:129] gcr.io/k8s-minikube/storage-provisioner:v4 I0324 20:37:31.847710 32984 out.go:129] k8s.gcr.io/etcd:3.4.13-0 I0324 20:37:31.848753 32984 out.go:129] k8s.gcr.io/coredns:1.7.0 I0324 20:37:31.850024 32984 out.go:129] k8s.gcr.io/pause:3.2 I0324 20:37:31.850877 32984 out.go:129] metallb/controller: I0324 20:37:31.851742 32984 out.go:129] metallb/speaker: I0324 20:37:31.852566 32984 out.go:129] I0324 20:37:31.853392 32984 out.go:129] -- /stdout -- I0324 20:37:31.859563 32984 out.go:129] I0324 20:30:18.723872 8815 docker.go:429] kubernetesui/dashboard:v2.1.0 wasn't preloaded I0324 20:37:31.860652 32984 out.go:129] I0324 20:30:18.723909 8815 exec_runner.go:52] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json I0324 20:37:31.861710 32984 out.go:129] I0324 20:30:18.732881 8815 exec_runner.go:52] Run: which lz4 I0324 20:37:31.862914 32984 out.go:129] I0324 20:30:18.733965 8815 kubeadm.go:880] preload failed, will try to load cached images: getting file asset: open: open /root/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v9-v1.20.2-docker-overlay2-amd64.tar.lz4: no such file or directory I0324 20:37:31.864306 32984 out.go:129] I0324 20:30:18.734009 8815 exec_runner.go:52] Run: docker info --format I0324 20:37:31.865380 32984 out.go:129] I0324 20:30:18.867308 8815 cni.go:74] Creating CNI manager for "" I0324 20:37:31.866729 32984 out.go:129] I0324 20:30:18.867323 8815 cni.go:129] Driver none used, CNI unnecessary in this configuration, recommending no CNI I0324 20:37:31.868119 32984 out.go:129] I0324 20:30:18.867335 8815 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16 I0324 20:37:31.869393 32984 out.go:129] I0324 20:30:18.867357 8815 kubeadm.go:150] kubeadm options: 
{CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.1.6 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:ubuntu DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.1.6"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.1.6 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]} I0324 20:37:31.870448 32984 out.go:129] I0324 20:30:18.867497 8815 kubeadm.go:154] kubeadm config: I0324 20:37:31.871450 32984 out.go:129] apiVersion: kubeadm.k8s.io/v1beta2 I0324 20:37:31.872332 32984 out.go:129] kind: InitConfiguration I0324 20:37:31.873403 32984 out.go:129] localAPIEndpoint: I0324 20:37:31.874338 32984 out.go:129] advertiseAddress: 192.168.1.6 I0324 20:37:31.875299 32984 out.go:129] bindPort: 8443 I0324 20:37:31.876102 32984 out.go:129] bootstrapTokens: I0324 20:37:31.877181 32984 out.go:129] - groups: I0324 20:37:31.878179 32984 out.go:129] - system:bootstrappers:kubeadm:default-node-token I0324 20:37:31.879030 32984 out.go:129] ttl: 24h0m0s I0324 20:37:31.880012 32984 out.go:129] usages: I0324 20:37:31.880880 32984 out.go:129] - signing I0324 20:37:31.883370 32984 out.go:129] - authentication I0324 20:37:31.884659 32984 out.go:129] nodeRegistration: I0324 20:37:31.885803 32984 out.go:129] criSocket: 
/var/run/dockershim.sock I0324 20:37:31.886732 32984 out.go:129] name: "ubuntu" I0324 20:37:31.887565 32984 out.go:129] kubeletExtraArgs: I0324 20:37:31.888485 32984 out.go:129] node-ip: 192.168.1.6 I0324 20:37:31.889465 32984 out.go:129] taints: [] I0324 20:37:31.890509 32984 out.go:129] --- I0324 20:37:31.891355 32984 out.go:129] apiVersion: kubeadm.k8s.io/v1beta2 I0324 20:37:31.892115 32984 out.go:129] kind: ClusterConfiguration I0324 20:37:31.892941 32984 out.go:129] apiServer: I0324 20:37:31.893702 32984 out.go:129] certSANs: ["127.0.0.1", "localhost", "192.168.1.6"] I0324 20:37:31.894594 32984 out.go:129] extraArgs: I0324 20:37:31.895326 32984 out.go:129] enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota" I0324 20:37:31.896100 32984 out.go:129] controllerManager: I0324 20:37:31.896917 32984 out.go:129] extraArgs: I0324 20:37:31.897700 32984 out.go:129] allocate-node-cidrs: "true" I0324 20:37:31.898511 32984 out.go:129] leader-elect: "false" I0324 20:37:31.899219 32984 out.go:129] scheduler: I0324 20:37:31.902527 32984 out.go:129] extraArgs: I0324 20:37:31.903729 32984 out.go:129] leader-elect: "false" I0324 20:37:31.904883 32984 out.go:129] certificatesDir: /var/lib/minikube/certs I0324 20:37:31.906221 32984 out.go:129] clusterName: mk I0324 20:37:31.907073 32984 out.go:129] controlPlaneEndpoint: control-plane.minikube.internal:8443 I0324 20:37:31.907902 32984 out.go:129] dns: I0324 20:37:31.908805 32984 out.go:129] type: CoreDNS I0324 20:37:31.909542 32984 out.go:129] etcd: I0324 20:37:31.910407 32984 out.go:129] local: I0324 20:37:31.911251 32984 out.go:129] dataDir: /var/lib/minikube/etcd I0324 20:37:31.911962 32984 out.go:129] extraArgs: I0324 20:37:31.912649 32984 out.go:129] proxy-refresh-interval: "70000" I0324 20:37:31.913453 32984 out.go:129] kubernetesVersion: v1.20.2 I0324 20:37:31.914141 32984 
out.go:129] networking: I0324 20:37:31.914876 32984 out.go:129] dnsDomain: cluster.local I0324 20:37:31.915601 32984 out.go:129] podSubnet: "10.244.0.0/16" I0324 20:37:31.916359 32984 out.go:129] serviceSubnet: 10.96.0.0/12 I0324 20:37:31.917130 32984 out.go:129] --- I0324 20:37:31.918079 32984 out.go:129] apiVersion: kubelet.config.k8s.io/v1beta1 I0324 20:37:31.918935 32984 out.go:129] kind: KubeletConfiguration I0324 20:37:31.919799 32984 out.go:129] authentication: I0324 20:37:31.920612 32984 out.go:129] x509: I0324 20:37:31.921362 32984 out.go:129] clientCAFile: /var/lib/minikube/certs/ca.crt I0324 20:37:31.922068 32984 out.go:129] cgroupDriver: cgroupfs I0324 20:37:31.922954 32984 out.go:129] clusterDomain: "cluster.local" I0324 20:37:31.923706 32984 out.go:129] # disable disk resource management by default I0324 20:37:31.924382 32984 out.go:129] imageGCHighThresholdPercent: 100 I0324 20:37:31.925069 32984 out.go:129] evictionHard: I0324 20:37:31.925924 32984 out.go:129] nodefs.available: "0%" I0324 20:37:31.926851 32984 out.go:129] nodefs.inodesFree: "0%" I0324 20:37:31.927983 32984 out.go:129] imagefs.available: "0%" I0324 20:37:31.929127 32984 out.go:129] failSwapOn: false I0324 20:37:31.930577 32984 out.go:129] staticPodPath: /etc/kubernetes/manifests I0324 20:37:31.932362 32984 out.go:129] --- I0324 20:37:31.933489 32984 out.go:129] apiVersion: kubeproxy.config.k8s.io/v1alpha1 I0324 20:37:31.934921 32984 out.go:129] kind: KubeProxyConfiguration I0324 20:37:31.935873 32984 out.go:129] clusterCIDR: "10.244.0.0/16" I0324 20:37:31.936658 32984 out.go:129] metricsBindAddress: 0.0.0.0:10249 I0324 20:37:31.937515 32984 out.go:129] I0324 20:37:31.938359 32984 out.go:129] I0324 20:30:18.867595 8815 kubeadm.go:919] kubelet [Unit] I0324 20:37:31.939181 32984 out.go:129] Wants=docker.socket I0324 20:37:31.939986 32984 out.go:129] I0324 20:37:31.940806 32984 out.go:129] [Service] I0324 20:37:31.941652 32984 out.go:129] ExecStart= I0324 20:37:31.942439 32984 
out.go:129] ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ubuntu --kubeconfig=/etc/kubernetes/kubelet.conf --max-pods=1000 --node-ip=192.168.1.6 --resolv-conf=/run/systemd/resolve/resolv.conf I0324 20:37:31.943376 32984 out.go:129] I0324 20:37:31.944271 32984 out.go:129] [Install] I0324 20:37:31.945197 32984 out.go:129] config: I0324 20:37:31.946208 32984 out.go:129] {KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP:192.168.1.100 LoadBalancerEndIP:192.168.1.254 CustomIngressCert: ExtraOptions:[{Component:kubelet Key:max-pods Value:1000} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} I0324 20:37:31.947106 32984 out.go:129] I0324 20:30:18.867644 8815 exec_runner.go:52] Run: sudo ls /var/lib/minikube/binaries/v1.20.2 I0324 20:37:31.947941 32984 out.go:129] I0324 20:30:18.936783 8815 binaries.go:44] Found k8s binaries, skipping transfer I0324 20:37:31.948947 32984 out.go:129] I0324 20:30:18.936860 8815 exec_runner.go:52] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube I0324 20:37:31.949758 32984 out.go:129] I0324 20:30:18.948003 8815 exec_runner.go:145] found /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, removing ... 
I0324 20:37:31.950557 32984 out.go:129] I0324 20:30:18.948013 8815 exec_runner.go:190] rm: /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0324 20:37:31.951343 32984 out.go:129] I0324 20:30:18.948111 8815 exec_runner.go:152] cp: memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (394 bytes)
I0324 20:37:31.952057 32984 out.go:129] I0324 20:30:18.948265 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube700630499 /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
I0324 20:37:31.952758 32984 out.go:129] I0324 20:30:18.959342 8815 exec_runner.go:145] found /lib/systemd/system/kubelet.service, removing ...
I0324 20:37:31.953807 32984 out.go:129] I0324 20:30:18.959365 8815 exec_runner.go:190] rm: /lib/systemd/system/kubelet.service
I0324 20:37:31.954678 32984 out.go:129] I0324 20:30:18.959414 8815 exec_runner.go:152] cp: memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0324 20:37:31.955582 32984 out.go:129] I0324 20:30:18.959518 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube064902630 /lib/systemd/system/kubelet.service
I0324 20:37:31.956681 32984 out.go:129] I0324 20:30:18.969279 8815 exec_runner.go:145] found /var/tmp/minikube/kubeadm.yaml.new, removing ...
I0324 20:37:31.957670 32984 out.go:129] I0324 20:30:18.969291 8815 exec_runner.go:190] rm: /var/tmp/minikube/kubeadm.yaml.new
I0324 20:37:31.958908 32984 out.go:129] I0324 20:30:18.969341 8815 exec_runner.go:152] cp: memory --> /var/tmp/minikube/kubeadm.yaml.new (1835 bytes)
I0324 20:37:31.960588 32984 out.go:129] I0324 20:30:18.969503 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube960243213 /var/tmp/minikube/kubeadm.yaml.new
I0324 20:37:31.962001 32984 out.go:129] I0324 20:30:18.978236 8815 exec_runner.go:52] Run: grep 192.168.1.6 control-plane.minikube.internal$ /etc/hosts
I0324 20:37:31.963068 32984 out.go:129] I0324 20:30:18.979879 8815 certs.go:52] Setting up /root/.minikube/profiles/minikube for IP: 192.168.1.6
I0324 20:37:31.964141 32984 out.go:129] I0324 20:30:18.979911 8815 certs.go:171] skipping minikubeCA CA generation: /root/.minikube/ca.key
I0324 20:37:31.965179 32984 out.go:129] I0324 20:30:18.979925 8815 certs.go:171] skipping proxyClientCA CA generation: /root/.minikube/proxy-client-ca.key
I0324 20:37:31.966473 32984 out.go:129] I0324 20:30:18.979970 8815 certs.go:275] skipping minikube-user signed cert generation: /root/.minikube/profiles/minikube/client.key
I0324 20:37:31.967502 32984 out.go:129] I0324 20:30:18.980020 8815 certs.go:275] skipping minikube signed cert generation: /root/.minikube/profiles/minikube/apiserver.key.22f1d0ce
I0324 20:37:31.968648 32984 out.go:129] I0324 20:30:18.980042 8815 certs.go:275] skipping aggregator signed cert generation: /root/.minikube/profiles/minikube/proxy-client.key
I0324 20:37:31.969939 32984 out.go:129] I0324 20:30:18.980145 8815 certs.go:354] found cert: /root/.minikube/certs/root/.minikube/certs/ca-key.pem (1675 bytes)
I0324 20:37:31.971161 32984 out.go:129] I0324 20:30:18.980190 8815 certs.go:354] found cert: /root/.minikube/certs/root/.minikube/certs/ca.pem (1074 bytes)
I0324 20:37:31.972347 32984 out.go:129] I0324 20:30:18.980221 8815 certs.go:354] found cert: /root/.minikube/certs/root/.minikube/certs/cert.pem (1115 bytes)
I0324 20:37:31.973540 32984 out.go:129] I0324 20:30:18.980243 8815 certs.go:354] found cert: /root/.minikube/certs/root/.minikube/certs/key.pem (1675 bytes)
I0324 20:37:31.974831 32984 out.go:129] I0324 20:30:18.981247 8815 exec_runner.go:145] found /var/lib/minikube/certs/apiserver.crt, removing ...
I0324 20:37:31.975864 32984 out.go:129] I0324 20:30:18.981258 8815 exec_runner.go:190] rm: /var/lib/minikube/certs/apiserver.crt
I0324 20:37:31.977116 32984 out.go:129] I0324 20:30:18.981323 8815 exec_runner.go:152] cp: /root/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0324 20:37:31.978122 32984 out.go:129] I0324 20:30:18.981486 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube912654344 /var/lib/minikube/certs/apiserver.crt
I0324 20:37:31.979019 32984 out.go:129] I0324 20:30:18.990755 8815 exec_runner.go:145] found /var/lib/minikube/certs/apiserver.key, removing ...
I0324 20:37:31.979970 32984 out.go:129] I0324 20:30:18.990766 8815 exec_runner.go:190] rm: /var/lib/minikube/certs/apiserver.key
I0324 20:37:31.980857 32984 out.go:129] I0324 20:30:18.990812 8815 exec_runner.go:152] cp: /root/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0324 20:37:31.981825 32984 out.go:129] I0324 20:30:18.991015 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube132322247 /var/lib/minikube/certs/apiserver.key
I0324 20:37:31.982662 32984 out.go:129] I0324 20:30:18.999991 8815 exec_runner.go:145] found /var/lib/minikube/certs/proxy-client.crt, removing ...
I0324 20:37:31.983551 32984 out.go:129] I0324 20:30:19.000005 8815 exec_runner.go:190] rm: /var/lib/minikube/certs/proxy-client.crt
I0324 20:37:31.984545 32984 out.go:129] I0324 20:30:19.000068 8815 exec_runner.go:152] cp: /root/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0324 20:37:31.985339 32984 out.go:129] I0324 20:30:19.000217 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube858219642 /var/lib/minikube/certs/proxy-client.crt
I0324 20:37:31.986246 32984 out.go:129] I0324 20:30:19.008885 8815 exec_runner.go:145] found /var/lib/minikube/certs/proxy-client.key, removing ...
I0324 20:37:31.987422 32984 out.go:129] I0324 20:30:19.008897 8815 exec_runner.go:190] rm: /var/lib/minikube/certs/proxy-client.key
I0324 20:37:31.988593 32984 out.go:129] I0324 20:30:19.008972 8815 exec_runner.go:152] cp: /root/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0324 20:37:31.989447 32984 out.go:129] I0324 20:30:19.009141 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube674552721 /var/lib/minikube/certs/proxy-client.key
I0324 20:37:31.990485 32984 out.go:129] I0324 20:30:19.017779 8815 exec_runner.go:145] found /var/lib/minikube/certs/ca.crt, removing ...
I0324 20:37:31.991519 32984 out.go:129] I0324 20:30:19.017797 8815 exec_runner.go:190] rm: /var/lib/minikube/certs/ca.crt
I0324 20:37:31.992486 32984 out.go:129] I0324 20:30:19.017841 8815 exec_runner.go:152] cp: /root/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0324 20:37:31.993279 32984 out.go:129] I0324 20:30:19.018032 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube725471164 /var/lib/minikube/certs/ca.crt
I0324 20:37:31.994094 32984 out.go:129] I0324 20:30:19.027576 8815 exec_runner.go:145] found /var/lib/minikube/certs/ca.key, removing ...
I0324 20:37:31.994910 32984 out.go:129] I0324 20:30:19.027592 8815 exec_runner.go:190] rm: /var/lib/minikube/certs/ca.key
I0324 20:37:31.995669 32984 out.go:129] I0324 20:30:19.027659 8815 exec_runner.go:152] cp: /root/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0324 20:37:31.996472 32984 out.go:129] I0324 20:30:19.027913 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube862186475 /var/lib/minikube/certs/ca.key
I0324 20:37:31.997296 32984 out.go:129] I0324 20:30:19.036545 8815 exec_runner.go:145] found /var/lib/minikube/certs/proxy-client-ca.crt, removing ...
I0324 20:37:31.998152 32984 out.go:129] I0324 20:30:19.036556 8815 exec_runner.go:190] rm: /var/lib/minikube/certs/proxy-client-ca.crt
I0324 20:37:31.999212 32984 out.go:129] I0324 20:30:19.036609 8815 exec_runner.go:152] cp: /root/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0324 20:37:32.000610 32984 out.go:129] I0324 20:30:19.036799 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube993983566 /var/lib/minikube/certs/proxy-client-ca.crt
I0324 20:37:32.001627 32984 out.go:129] I0324 20:30:19.045561 8815 exec_runner.go:145] found /var/lib/minikube/certs/proxy-client-ca.key, removing ...
I0324 20:37:32.002503 32984 out.go:129] I0324 20:30:19.045572 8815 exec_runner.go:190] rm: /var/lib/minikube/certs/proxy-client-ca.key
I0324 20:37:32.003320 32984 out.go:129] I0324 20:30:19.045623 8815 exec_runner.go:152] cp: /root/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0324 20:37:32.004157 32984 out.go:129] I0324 20:30:19.045799 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube581369173 /var/lib/minikube/certs/proxy-client-ca.key
I0324 20:37:32.005132 32984 out.go:129] I0324 20:30:19.055172 8815 exec_runner.go:145] found /usr/share/ca-certificates/minikubeCA.pem, removing ...
I0324 20:37:32.006125 32984 out.go:129] I0324 20:30:19.055192 8815 exec_runner.go:190] rm: /usr/share/ca-certificates/minikubeCA.pem
I0324 20:37:32.006997 32984 out.go:129] I0324 20:30:19.055271 8815 exec_runner.go:152] cp: /root/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0324 20:37:32.007926 32984 out.go:129] I0324 20:30:19.055489 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube368697520 /usr/share/ca-certificates/minikubeCA.pem
I0324 20:37:32.008756 32984 out.go:129] I0324 20:30:19.064898 8815 exec_runner.go:145] found /var/lib/minikube/kubeconfig, removing ...
I0324 20:37:32.009604 32984 out.go:129] I0324 20:30:19.064913 8815 exec_runner.go:190] rm: /var/lib/minikube/kubeconfig
I0324 20:37:32.010609 32984 out.go:129] I0324 20:30:19.064971 8815 exec_runner.go:152] cp: memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0324 20:37:32.011634 32984 out.go:129] I0324 20:30:19.065187 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube045457999 /var/lib/minikube/kubeconfig
I0324 20:37:32.012412 32984 out.go:129] I0324 20:30:19.074963 8815 exec_runner.go:52] Run: openssl version
I0324 20:37:32.013360 32984 out.go:129] I0324 20:30:19.078834 8815 exec_runner.go:52] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0324 20:37:32.014307 32984 out.go:129] I0324 20:30:19.088424 8815 exec_runner.go:52] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0324 20:37:32.015102 32984 out.go:129] I0324 20:30:19.090195 8815 certs.go:395] hashing: -rw-r--r-- 1 root root 1111 Mar 24 20:30 /usr/share/ca-certificates/minikubeCA.pem
I0324 20:37:32.015828 32984 out.go:129] I0324 20:30:19.090230 8815 exec_runner.go:52] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0324 20:37:32.016590 32984 out.go:129] I0324 20:30:19.094059 8815 exec_runner.go:52] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0324 20:37:32.017367 32984 out.go:129] I0324 20:30:19.102725 8815 kubeadm.go:385] StartCluster: {Name:minikube KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e Memory:4900 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP:192.168.1.100 LoadBalancerEndIP:192.168.1.254 CustomIngressCert: ExtraOptions:[{Component:kubelet Key:max-pods Value:1000} {Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:192.168.1.6 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:true metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] Network: MultiNodeRequested:false}
I0324 20:37:32.018354 32984 out.go:129] I0324 20:30:19.102861 8815 exec_runner.go:52] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=
I0324 20:37:32.019115 32984 out.go:129] I0324 20:30:19.154814 8815 exec_runner.go:52] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0324 20:37:32.019866 32984 out.go:129] I0324 20:30:19.163416 8815 kubeadm.go:396] found existing configuration files, will attempt cluster restart
I0324 20:37:32.020590 32984 out.go:129] I0324 20:30:19.163427 8815 kubeadm.go:594] restartCluster start
I0324 20:37:32.021293 32984 out.go:129] I0324 20:30:19.163470 8815 exec_runner.go:52] Run: sudo test -d /data/minikube
I0324 20:37:32.022026 32984 out.go:129] I0324 20:30:19.171704 8815 kubeadm.go:125] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: exit status 1
I0324 20:37:32.022860 32984 out.go:129] stdout:
I0324 20:37:32.023685 32984 out.go:129]
I0324 20:37:32.024638 32984 out.go:129] stderr:
I0324 20:37:32.025567 32984 out.go:129] I0324 20:30:19.171899 8815 kubeconfig.go:117] verify returned: extract IP: "minikube" does not appear in /root/.kube/config
I0324 20:37:32.026658 32984 out.go:129] I0324 20:30:19.171971 8815 kubeconfig.go:128] "minikube" context is missing from /root/.kube/config - will repair!
I0324 20:37:32.027803 32984 out.go:129] I0324 20:30:19.172250 8815 lock.go:36] WriteFile acquiring /root/.kube/config: {Name:mk72a1487fd2da23da9e8181e16f352a6105bd56 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0324 20:37:32.029015 32984 out.go:129] I0324 20:30:19.174979 8815 exec_runner.go:52] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
I0324 20:37:32.029809 32984 out.go:129] I0324 20:30:19.183530 8815 api_server.go:146] Checking apiserver status ...
I0324 20:37:32.030809 32984 out.go:129] I0324 20:30:19.183574 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.031672 32984 out.go:129] W0324 20:30:19.292295 8815 api_server.go:150] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: exit status 1
I0324 20:37:32.032698 32984 out.go:129] stdout:
I0324 20:37:32.033814 32984 out.go:129]
I0324 20:37:32.034957 32984 out.go:129] stderr:
I0324 20:37:32.036000 32984 out.go:129] I0324 20:30:19.292305 8815 kubeadm.go:573] needs reconfigure: apiserver in state Stopped
I0324 20:37:32.037018 32984 out.go:129] I0324 20:30:19.292312 8815 kubeadm.go:1042] stopping kube-system containers ...
I0324 20:37:32.038082 32984 out.go:129] I0324 20:30:19.292353 8815 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
I0324 20:37:32.039691 32984 out.go:129] I0324 20:30:19.348819 8815 docker.go:261] Stopping containers: [7c29d30ec36b 36f77783d65d c7559741604d ac922472f26d cc5c2c86a143 d084b5065d92 4931f4f91246 1ac197aff9c0 f0f1229795c4 319114f65502 1adec2628343 bc1799000965 5398f52e525d b345ee020b12 fb22a8fb9dc7 4310b011d7c2 bf32fcfec1dc 5eb6877446df 710e3b2ca3ac 7057940755f4 e9dfecf566ba ef12786d2d9b 1a7327d5820a e273bcfd2125 a2c1b692a641]
I0324 20:37:32.040817 32984 out.go:129] I0324 20:30:19.348879 8815 exec_runner.go:52] Run: docker stop 7c29d30ec36b 36f77783d65d c7559741604d ac922472f26d cc5c2c86a143 d084b5065d92 4931f4f91246 1ac197aff9c0 f0f1229795c4 319114f65502 1adec2628343 bc1799000965 5398f52e525d b345ee020b12 fb22a8fb9dc7 4310b011d7c2 bf32fcfec1dc 5eb6877446df 710e3b2ca3ac 7057940755f4 e9dfecf566ba ef12786d2d9b 1a7327d5820a e273bcfd2125 a2c1b692a641
I0324 20:37:32.042095 32984 out.go:129] I0324 20:30:19.403235 8815 exec_runner.go:52] Run: sudo systemctl stop kubelet
I0324 20:37:32.043221 32984 out.go:129] I0324 20:30:19.420979 8815 exec_runner.go:52] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0324 20:37:32.044541 32984 out.go:129] I0324 20:30:19.430118 8815 kubeadm.go:153] found existing configuration files:
I0324 20:37:32.045621 32984 out.go:129] -rw------- 1 root root 5611 Mar 22 08:11 /etc/kubernetes/admin.conf
I0324 20:37:32.046735 32984 out.go:129] -rw------- 1 root root 5627 Mar 24 20:29 /etc/kubernetes/controller-manager.conf
I0324 20:37:32.047662 32984 out.go:129] -rw------- 1 root root 1963 Mar 22 08:11 /etc/kubernetes/kubelet.conf
I0324 20:37:32.048451 32984 out.go:129] -rw------- 1 root root 5575 Mar 24 20:29 /etc/kubernetes/scheduler.conf
I0324 20:37:32.049266 32984 out.go:129]
I0324 20:37:32.049993 32984 out.go:129] I0324 20:30:19.430164 8815 exec_runner.go:52] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
I0324 20:37:32.050837 32984 out.go:129] I0324 20:30:19.438623 8815 exec_runner.go:52] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
I0324 20:37:32.051609 32984 out.go:129] I0324 20:30:19.447275 8815 exec_runner.go:52] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
I0324 20:37:32.052354 32984 out.go:129] I0324 20:30:19.455886 8815 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: exit status 1
I0324 20:37:32.053063 32984 out.go:129] stdout:
I0324 20:37:32.053815 32984 out.go:129]
I0324 20:37:32.054639 32984 out.go:129] stderr:
I0324 20:37:32.055403 32984 out.go:129] I0324 20:30:19.455937 8815 exec_runner.go:52] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
I0324 20:37:32.056133 32984 out.go:129] I0324 20:30:19.464305 8815 exec_runner.go:52] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
I0324 20:37:32.056918 32984 out.go:129] I0324 20:30:19.472903 8815 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: exit status 1
I0324 20:37:32.057637 32984 out.go:129] stdout:
I0324 20:37:32.058486 32984 out.go:129]
I0324 20:37:32.059277 32984 out.go:129] stderr:
I0324 20:37:32.060237 32984 out.go:129] I0324 20:30:19.472953 8815 exec_runner.go:52] Run: sudo rm -f /etc/kubernetes/scheduler.conf
I0324 20:37:32.061110 32984 out.go:129] I0324 20:30:19.481163 8815 exec_runner.go:52] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0324 20:37:32.061859 32984 out.go:129] I0324 20:30:19.490574 8815 kubeadm.go:670] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
I0324 20:37:32.063153 32984 out.go:129] I0324 20:30:19.490587 8815 exec_runner.go:52] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
I0324 20:37:32.064236 32984 out.go:129] I0324 20:30:19.690764 8815 exec_runner.go:52] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
I0324 20:37:32.065185 32984 out.go:129] I0324 20:30:20.643018 8815 exec_runner.go:52] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
I0324 20:37:32.065989 32984 out.go:129] I0324 20:30:21.140113 8815 exec_runner.go:52] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
I0324 20:37:32.067003 32984 out.go:129] I0324 20:30:21.347532 8815 exec_runner.go:52] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
I0324 20:37:32.068095 32984 out.go:129] I0324 20:30:21.554179 8815 api_server.go:48] waiting for apiserver process to appear ...
I0324 20:37:32.069021 32984 out.go:129] I0324 20:30:21.554224 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.070305 32984 out.go:129] I0324 20:30:22.161059 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.071777 32984 out.go:129] I0324 20:30:22.660905 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.072891 32984 out.go:129] I0324 20:30:23.161743 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.073810 32984 out.go:129] I0324 20:30:23.661873 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.074863 32984 out.go:129] I0324 20:30:24.161238 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.075796 32984 out.go:129] I0324 20:30:24.661935 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.076699 32984 out.go:129] I0324 20:30:25.161966 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.077734 32984 out.go:129] I0324 20:30:25.661382 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.078713 32984 out.go:129] I0324 20:30:26.160959 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.079685 32984 out.go:129] I0324 20:30:26.662105 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.080470 32984 out.go:129] I0324 20:30:27.161403 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.081560 32984 out.go:129] I0324 20:30:27.662228 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.082620 32984 out.go:129] I0324 20:30:28.161314 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.083488 32984 out.go:129] I0324 20:30:28.661046 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.084512 32984 out.go:129] I0324 20:30:29.161764 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.085393 32984 out.go:129] I0324 20:30:29.661132 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.086282 32984 out.go:129] I0324 20:30:30.161632 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.087196 32984 out.go:129] I0324 20:30:30.661037 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.088142 32984 out.go:129] I0324 20:31:00.661576 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.089174 32984 out.go:129] I0324 20:31:01.161328 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.090041 32984 out.go:129] I0324 20:31:01.661445 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.090756 32984 out.go:129] I0324 20:31:02.161824 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.091502 32984 out.go:129] I0324 20:31:02.662101 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.092345 32984 out.go:129] I0324 20:31:03.161221 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.093290 32984 out.go:129] I0324 20:31:03.661872 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.094382 32984 out.go:129] I0324 20:31:04.162554 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.095401 32984 out.go:129] I0324 20:31:04.661544 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.096697 32984 out.go:129] I0324 20:31:05.162441 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.097808 32984 out.go:129] I0324 20:31:05.662318 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.098934 32984 out.go:129] I0324 20:31:31.662813 8815 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
I0324 20:37:32.099707 32984 out.go:129] I0324 20:31:31.730152 8815 logs.go:255] 1 containers: [c9b0c45237e1]
I0324 20:37:32.100578 32984 out.go:129] I0324 20:31:31.730208 8815 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
I0324 20:37:32.101702 32984 out.go:129] I0324 20:31:31.813237 8815 logs.go:255] 2 containers: [e538b5fe9dee 7057940755f4]
I0324 20:37:32.102947 32984 out.go:129] I0324 20:31:31.813317 8815 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
I0324 20:37:32.104277 32984 out.go:129] I0324 20:31:31.892111 8815 logs.go:255] 1 containers: [b345ee020b12]
I0324 20:37:32.106803 32984 out.go:129] I0324 20:31:31.892177 8815 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
I0324 20:37:32.108074 32984 out.go:129] I0324 20:31:31.954511 8815 logs.go:255] 2 containers: [5217587868e2 e273bcfd2125]
I0324 20:37:32.109003 32984 out.go:129] I0324 20:31:31.954574 8815 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
I0324 20:37:32.109838 32984 out.go:129] I0324 20:31:32.024800 8815 logs.go:255] 1 containers: [fb22a8fb9dc7]
I0324 20:37:32.110744 32984 out.go:129] I0324 20:31:32.024886 8815 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
I0324 20:37:32.111555 32984 out.go:129] I0324 20:31:32.085898 8815 logs.go:255] 0 containers: []
I0324 20:37:32.112291 32984 out.go:129] W0324 20:31:32.085916 8815 logs.go:257] No container was found matching "kubernetes-dashboard"
I0324 20:37:32.113119 32984 out.go:129] I0324 20:31:32.085978 8815 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
I0324 20:37:32.114361 32984 out.go:129] I0324 20:31:32.172973 8815 logs.go:255] 0 containers: []
I0324 20:37:32.115046 32984 out.go:129] W0324 20:31:32.173005 8815 logs.go:257] No container was found matching "storage-provisioner"
I0324 20:37:32.115766 32984 out.go:129] I0324 20:31:32.173050 8815 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
I0324 20:37:32.116464 32984 out.go:129] I0324 20:31:32.235293 8815 logs.go:255] 2 containers: [eb6f023ce022 710e3b2ca3ac]
I0324 20:37:32.117108 32984 out.go:129] I0324 20:31:32.235332 8815 logs.go:122] Gathering logs for dmesg ...
I0324 20:37:32.117797 32984 out.go:129] I0324 20:31:32.235344 8815 exec_runner.go:52] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
I0324 20:37:32.118636 32984 out.go:129] I0324 20:31:32.259567 8815 logs.go:122] Gathering logs for kube-scheduler [e273bcfd2125] ...
I0324 20:37:32.119289 32984 out.go:129] I0324 20:31:32.259587 8815 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 400 e273bcfd2125"
I0324 20:37:32.119910 32984 out.go:129] I0324 20:31:32.386580 8815 logs.go:122] Gathering logs for kube-controller-manager [710e3b2ca3ac] ...
I0324 20:37:32.120628 32984 out.go:129] I0324 20:31:32.386607 8815 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 400 710e3b2ca3ac"
I0324 20:37:32.121273 32984 out.go:129] I0324 20:31:32.630143 8815 logs.go:122] Gathering logs for coredns [b345ee020b12] ...
I0324 20:37:32.121993 32984 out.go:129] I0324 20:31:32.630172 8815 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 400 b345ee020b12"
I0324 20:37:32.122661 32984 out.go:129] I0324 20:31:32.802249 8815 logs.go:122] Gathering logs for kube-scheduler [5217587868e2] ...
I0324 20:37:32.123342 32984 out.go:129] I0324 20:31:32.802271 8815 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 400 5217587868e2"
I0324 20:37:32.123980 32984 out.go:129] I0324 20:31:32.870176 8815 logs.go:122] Gathering logs for Docker ...
I0324 20:37:32.124612 32984 out.go:129] I0324 20:31:32.870197 8815 exec_runner.go:52] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
I0324 20:37:32.125195 32984 out.go:129] I0324 20:31:32.946083 8815 logs.go:122] Gathering logs for kubelet ...
I0324 20:37:32.125846 32984 out.go:129] I0324 20:31:32.946106 8815 exec_runner.go:52] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
I0324 20:37:32.126507 32984 out.go:129] I0324 20:31:33.003385 8815 logs.go:122] Gathering logs for describe nodes ...
I0324 20:37:32.127131 32984 out.go:129] I0324 20:31:33.003404 8815 exec_runner.go:52] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
I0324 20:37:32.127772 32984 out.go:129] I0324 20:31:37.613572 8815 exec_runner.go:85] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (4.610132985s)
I0324 20:37:32.128398 32984 out.go:129] I0324 20:31:37.621261 8815 logs.go:122] Gathering logs for kube-apiserver [c9b0c45237e1] ...
I0324 20:37:32.129063 32984 out.go:129] I0324 20:31:37.621280 8815 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 400 c9b0c45237e1"
I0324 20:37:32.129723 32984 out.go:129] I0324 20:31:37.695428 8815 logs.go:122] Gathering logs for etcd [e538b5fe9dee] ...
I0324 20:37:32.130519 32984 out.go:129] I0324 20:31:37.695445 8815 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 400 e538b5fe9dee"
I0324 20:37:32.131220 32984 out.go:129] I0324 20:31:37.755674 8815 logs.go:122] Gathering logs for etcd [7057940755f4] ...
I0324 20:37:32.131971 32984 out.go:129] I0324 20:31:37.755691 8815 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 400 7057940755f4"
I0324 20:37:32.132771 32984 out.go:129] I0324 20:31:37.836068 8815 logs.go:122] Gathering logs for kube-proxy [fb22a8fb9dc7] ...
I0324 20:37:32.133654 32984 out.go:129] I0324 20:31:37.836089 8815 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 400 fb22a8fb9dc7"
I0324 20:37:32.134848 32984 out.go:129] W0324 20:31:37.900644 8815 logs.go:129] failed kube-proxy [fb22a8fb9dc7]: command: /bin/bash -c "docker logs --tail 400 fb22a8fb9dc7" /bin/bash -c "docker logs --tail 400 fb22a8fb9dc7": exit status 1
I0324 20:37:32.135908 32984 out.go:129] stdout:
I0324 20:37:32.136844 32984 out.go:129]
I0324 20:37:32.137859 32984 out.go:129] stderr:
I0324 20:37:32.138783 32984 out.go:129] Error: No such container: fb22a8fb9dc7
I0324 20:37:32.139644 32984 out.go:129] output:
I0324 20:37:32.140587 32984 out.go:129] ** stderr **
I0324 20:37:32.141580 32984 out.go:129] Error: No such container: fb22a8fb9dc7
I0324 20:37:32.142448 32984 out.go:129]
I0324 20:37:32.143401 32984 out.go:129] ** /stderr **
I0324 20:37:32.144162 32984 out.go:129] I0324 20:31:37.900664 8815 logs.go:122] Gathering logs for kube-controller-manager [eb6f023ce022] ...
I0324 20:37:32.144948 32984 out.go:129] I0324 20:31:37.900675 8815 exec_runner.go:52] Run: /bin/bash -c "docker logs --tail 400 eb6f023ce022"
I0324 20:37:32.145716 32984 out.go:129] I0324 20:31:37.952709 8815 logs.go:122] Gathering logs for container status ...
I0324 20:37:32.146649 32984 out.go:129] I0324 20:31:37.952726 8815 exec_runner.go:52] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
I0324 20:37:32.147413 32984 out.go:129] I0324 20:31:40.527081 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.148124 32984 out.go:129] I0324 20:32:10.244653 8815 api_server.go:68] duration metric: took 1m48.690468142s to wait for apiserver process to appear ...
I0324 20:37:32.148815 32984 out.go:129] I0324 20:32:10.244678 8815 api_server.go:84] waiting for apiserver healthz status ...
I0324 20:37:32.149525 32984 out.go:129] I0324 20:32:10.244691 8815 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ...
I0324 20:37:32.150306 32984 out.go:129] I0324 20:32:10.260159 8815 api_server.go:241] https://192.168.1.6:8443/healthz returned 200:
I0324 20:37:32.150987 32984 out.go:129] ok
I0324 20:37:32.151667 32984 out.go:129] I0324 20:32:10.273628 8815 api_server.go:137] control plane version: v1.20.2
I0324 20:37:32.152333 32984 out.go:129] I0324 20:32:10.273651 8815 api_server.go:127] duration metric: took 28.964997ms to wait for apiserver health ...
I0324 20:37:32.152986 32984 out.go:129] I0324 20:32:10.273662 8815 cni.go:74] Creating CNI manager for ""
I0324 20:37:32.153649 32984 out.go:129] I0324 20:32:10.273669 8815 cni.go:129] Driver none used, CNI unnecessary in this configuration, recommending no CNI
I0324 20:37:32.154347 32984 out.go:129] I0324 20:32:10.273678 8815 system_pods.go:41] waiting for kube-system pods to appear ...
I0324 20:37:32.155014 32984 out.go:129] I0324 20:32:10.298564 8815 system_pods.go:57] 7 kube-system pods found
I0324 20:37:32.155668 32984 out.go:129] I0324 20:32:10.298590 8815 system_pods.go:59] "coredns-74ff55c5b-nwnzj" [d11c7b8e-7df1-40b5-ac42-4e63a4ddf81c] Running
I0324 20:37:32.156330 32984 out.go:129] I0324 20:32:10.298602 8815 system_pods.go:59] "etcd-ubuntu" [63acfffb-300a-463c-a36d-cf683ef79b4d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0324 20:37:32.156999 32984 out.go:129] I0324 20:32:10.298620 8815 system_pods.go:59] "kube-apiserver-ubuntu" [2763368d-fa5b-4fea-bd78-b4221ebd0d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0324 20:37:32.157718 32984 out.go:129] I0324 20:32:10.298631 8815 system_pods.go:59] "kube-controller-manager-ubuntu" [d1f0e063-bd01-4410-8b59-c847d6616885] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0324 20:37:32.158535 32984 out.go:129] I0324 20:32:10.298638 8815 system_pods.go:59] "kube-proxy-hw57j" [fb4dfa7c-728b-477e-82d1-cd9fe1aa3d51] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0324 20:37:32.159238 32984 out.go:129] I0324 20:32:10.298647 8815 system_pods.go:59] "kube-scheduler-ubuntu" [49d6672a-86b2-4df1-bf46-c3ce87519d11] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0324 20:37:32.159945 32984 out.go:129] I0324 20:32:10.298654 8815 system_pods.go:59] "storage-provisioner" [bdb93406-7f9d-4e5e-91be-300b1353683e] Running
I0324 20:37:32.161072 32984 out.go:129] I0324 20:32:10.298661 8815 system_pods.go:72] duration metric: took 24.976595ms to wait for pod list to return data ...
I0324 20:37:32.162089 32984 out.go:129] I0324 20:32:10.298669 8815 node_conditions.go:101] verifying NodePressure condition ...
I0324 20:37:32.163107 32984 out.go:129] I0324 20:32:10.307184 8815 node_conditions.go:121] node storage ephemeral capacity is 153250288Ki
I0324 20:37:32.163888 32984 out.go:129] I0324 20:32:10.307208 8815 node_conditions.go:122] node cpu capacity is 8
I0324 20:37:32.164803 32984 out.go:129] I0324 20:32:10.307223 8815 node_conditions.go:104] duration metric: took 8.54875ms to run NodePressure ...
I0324 20:37:32.165927 32984 out.go:129] I0324 20:32:10.307243 8815 exec_runner.go:52] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0324 20:37:32.167194 32984 out.go:129] I0324 20:32:38.011995 8815 exec_runner.go:85] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (27.704707306s)
I0324 20:37:32.168200 32984 out.go:129] I0324 20:32:38.012028 8815 retry.go:31] will retry after 110.466µs: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": exit status 1
I0324 20:37:32.169046 32984 out.go:129] stdout:
I0324 20:37:32.169915 32984 out.go:129]
I0324 20:37:32.170774 32984 out.go:129] stderr:
I0324 20:37:32.171511 32984 out.go:129] error execution phase addon/coredns: couldn't retrieve DNS addon deployments: Get "https://control-plane.minikube.internal:8443/apis/apps/v1/namespaces/kube-system/deployments?labelSelector=k8s-app%3Dkube-dns": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
I0324 20:37:32.172282 32984 out.go:129] To see the stack trace of this error execute with --v=5 or higher
I0324 20:37:32.172931 32984 out.go:129] I0324 20:32:38.013775 8815 exec_runner.go:52] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
I0324 20:37:32.173714 32984 out.go:129] I0324 20:32:58.875169 8815 exec_runner.go:85] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (20.861365916s)
I0324 20:37:32.174513 32984 out.go:129] I0324 20:32:58.875187 8815 exec_runner.go:52] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0324 20:37:32.175413 32984 out.go:129] I0324 20:32:59.125437 8815 ops.go:34] apiserver oom_adj: -16
I0324 20:37:32.176313 32984 out.go:129] I0324 20:32:59.125450 8815 kubeadm.go:598] restartCluster took 2m39.962017779s
I0324 20:37:32.177266 32984 out.go:129] I0324 20:32:59.125459 8815 kubeadm.go:387] StartCluster complete in 2m40.022744029s
I0324 20:37:32.178121 32984 out.go:129] I0324 20:32:59.125478 8815 settings.go:142] acquiring lock: {Name:mk19004591210340446308469f521c5cfa3e1599 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0324 20:37:32.178948 32984 out.go:129] I0324 20:32:59.125895 8815 settings.go:150] Updating kubeconfig: /root/.kube/config
I0324 20:37:32.179737 32984 out.go:129] I0324 20:32:59.127714 8815 lock.go:36] WriteFile acquiring /root/.kube/config: {Name:mk72a1487fd2da23da9e8181e16f352a6105bd56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0324 20:37:32.180624 32984 out.go:129] I0324 20:32:59.136399 8815 out.go:129] 🤹 Configuring local host environment ...
I0324 20:37:32.181379 32984 out.go:129] W0324 20:32:59.136576 8815 out.go:191]
I0324 20:37:32.182108 32984 out.go:129] W0324 20:32:59.136688 8815 out.go:191] ❗ The 'none' driver is designed for experts who need to integrate with an existing VM
I0324 20:37:32.182840 32984 out.go:129] W0324 20:32:59.136752 8815 out.go:191] 💡 Most users should use the newer 'docker' driver instead, which does not require root!
I0324 20:37:32.183707 32984 out.go:129] W0324 20:32:59.136820 8815 out.go:191] 📘 For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/
I0324 20:37:32.184421 32984 out.go:129] W0324 20:32:59.136909 8815 out.go:191]
I0324 20:37:32.185073 32984 out.go:129] W0324 20:32:59.137045 8815 out.go:191] ❗ kubectl and minikube configuration will be stored in /root
I0324 20:37:32.185739 32984 out.go:129] W0324 20:32:59.137166 8815 out.go:191] ❗ To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:
I0324 20:37:32.186447 32984 out.go:129] W0324 20:32:59.137250 8815 out.go:191]
I0324 20:37:32.187083 32984 out.go:129] W0324 20:32:59.137397 8815 out.go:191] ▪ sudo mv /root/.kube /root/.minikube $HOME
I0324 20:37:32.187731 32984 out.go:129] W0324 20:32:59.137505 8815 out.go:191] ▪ sudo chown -R $USER $HOME/.kube $HOME/.minikube
I0324 20:37:32.188408 32984 out.go:129] W0324 20:32:59.137621 8815 out.go:191]
I0324 20:37:32.189232 32984 out.go:129] W0324 20:32:59.137703 8815 out.go:191] 💡 This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
I0324 20:37:32.190356 32984 out.go:129] I0324 20:32:59.137763 8815 start.go:202] Will wait 6m0s for node up to
I0324 20:37:32.191470 32984 out.go:129] I0324 20:32:59.129872 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl scale deployment --replicas=1 coredns -n=kube-system
I0324 20:37:32.192398 32984 out.go:129] I0324 20:32:59.129866 8815 addons.go:381] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:true metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
I0324 20:37:32.193493 32984 out.go:129] I0324 20:32:59.143834 8815 addons.go:58] Setting storage-provisioner=true in profile "minikube"
I0324 20:37:32.194948 32984 out.go:129] I0324 20:32:59.143862 8815 addons.go:134] Setting addon storage-provisioner=true in "minikube"
I0324 20:37:32.196099 32984 out.go:129] W0324 20:32:59.143867 8815 addons.go:143] addon storage-provisioner should already be in state true
I0324 20:37:32.197187 32984 out.go:129] I0324 20:32:59.143883 8815 host.go:66] Checking if "minikube" exists ...
I0324 20:37:32.198436 32984 out.go:129] I0324 20:32:59.145129 8815 kubeconfig.go:93] found "minikube" server: "https://192.168.1.6:8443"
I0324 20:37:32.199536 32984 out.go:129] I0324 20:32:59.145140 8815 api_server.go:146] Checking apiserver status ...
I0324 20:37:32.200734 32984 out.go:129] I0324 20:32:59.145173 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.201838 32984 out.go:129] I0324 20:32:59.146102 8815 out.go:129] 🔎 Verifying Kubernetes components...
I0324 20:37:32.202907 32984 out.go:129] I0324 20:32:59.146165 8815 addons.go:58] Setting default-storageclass=true in profile "minikube"
I0324 20:37:32.203979 32984 out.go:129] I0324 20:32:59.146186 8815 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0324 20:37:32.204990 32984 out.go:129] I0324 20:32:59.146537 8815 addons.go:58] Setting metallb=true in profile "minikube"
I0324 20:37:32.205874 32984 out.go:129] I0324 20:32:59.146549 8815 addons.go:134] Setting addon metallb=true in "minikube"
I0324 20:37:32.206910 32984 out.go:129] W0324 20:32:59.146555 8815 addons.go:143] addon metallb should already be in state true
I0324 20:37:32.208102 32984 out.go:129] I0324 20:32:59.146566 8815 host.go:66] Checking if "minikube" exists ...
I0324 20:37:32.208985 32984 out.go:129] I0324 20:32:59.147790 8815 kubeconfig.go:93] found "minikube" server: "https://192.168.1.6:8443"
I0324 20:37:32.209801 32984 out.go:129] I0324 20:32:59.147838 8815 api_server.go:146] Checking apiserver status ...
I0324 20:37:32.210861 32984 out.go:129] I0324 20:32:59.147842 8815 kubeconfig.go:93] found "minikube" server: "https://192.168.1.6:8443"
I0324 20:37:32.211867 32984 out.go:129] I0324 20:32:59.147852 8815 api_server.go:146] Checking apiserver status ...
I0324 20:37:32.212711 32984 out.go:129] I0324 20:32:59.147882 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.213537 32984 out.go:129] I0324 20:32:59.147885 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.214273 32984 out.go:129] I0324 20:32:59.148180 8815 exec_runner.go:52] Run: sudo systemctl is-active --quiet service kubelet
I0324 20:37:32.215073 32984 out.go:129] I0324 20:32:59.230535 8815 api_server.go:48] waiting for apiserver process to appear ...
I0324 20:37:32.215872 32984 out.go:129] I0324 20:32:59.230611 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.216594 32984 out.go:129] I0324 20:32:59.638572 8815 exec_runner.go:52] Run: sudo egrep ^[0-9]+:freezer: /proc/13063/cgroup
I0324 20:37:32.217321 32984 out.go:129] I0324 20:32:59.702347 8815 api_server.go:162] apiserver freezer: "2:freezer:/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/c9b0c45237e1ede262f39b93e72833226b38e510632821a9c1606159ff2bdb2b"
I0324 20:37:32.218070 32984 out.go:129] I0324 20:32:59.702406 8815 exec_runner.go:52] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/c9b0c45237e1ede262f39b93e72833226b38e510632821a9c1606159ff2bdb2b/freezer.state
I0324 20:37:32.218898 32984 out.go:129] I0324 20:32:59.781936 8815 api_server.go:184] freezer state: "THAWED"
I0324 20:37:32.219601 32984 out.go:129] I0324 20:32:59.781961 8815 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ...
I0324 20:37:32.220297 32984 out.go:129] I0324 20:32:59.794417 8815 api_server.go:241] https://192.168.1.6:8443/healthz returned 200:
I0324 20:37:32.221152 32984 out.go:129] ok
I0324 20:37:32.222165 32984 out.go:129] I0324 20:32:59.797974 8815 exec_runner.go:52] Run: sudo egrep ^[0-9]+:freezer: /proc/13063/cgroup
I0324 20:37:32.223114 32984 out.go:129] I0324 20:32:59.810162 8815 addons.go:134] Setting addon default-storageclass=true in "minikube"
I0324 20:37:32.223887 32984 out.go:129] W0324 20:32:59.810176 8815 addons.go:143] addon default-storageclass should already be in state true
I0324 20:37:32.224667 32984 out.go:129] I0324 20:32:59.810189 8815 host.go:66] Checking if "minikube" exists ...
I0324 20:37:32.225476 32984 out.go:129] I0324 20:32:59.811579 8815 kubeconfig.go:93] found "minikube" server: "https://192.168.1.6:8443"
I0324 20:37:32.226424 32984 out.go:129] I0324 20:32:59.811597 8815 api_server.go:146] Checking apiserver status ...
I0324 20:37:32.227756 32984 out.go:129] I0324 20:32:59.811741 8815 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0324 20:37:32.229100 32984 out.go:129] I0324 20:32:59.898515 8815 exec_runner.go:52] Run: sudo egrep ^[0-9]+:freezer: /proc/13063/cgroup
I0324 20:37:32.230240 32984 out.go:129] I0324 20:32:59.936109 8815 api_server.go:162] apiserver freezer: "2:freezer:/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/c9b0c45237e1ede262f39b93e72833226b38e510632821a9c1606159ff2bdb2b"
I0324 20:37:32.231141 32984 out.go:129] I0324 20:32:59.936177 8815 exec_runner.go:52] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/c9b0c45237e1ede262f39b93e72833226b38e510632821a9c1606159ff2bdb2b/freezer.state
I0324 20:37:32.232398 32984 out.go:129] I0324 20:32:59.947140 8815 api_server.go:68] duration metric: took 809.340402ms to wait for apiserver process to appear ...
I0324 20:37:32.233580 32984 out.go:129] I0324 20:32:59.947160 8815 api_server.go:84] waiting for apiserver healthz status ...
I0324 20:37:32.234678 32984 out.go:129] I0324 20:32:59.947172 8815 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ...
I0324 20:37:32.235554 32984 out.go:129] I0324 20:32:59.960417 8815 api_server.go:241] https://192.168.1.6:8443/healthz returned 200:
I0324 20:37:32.236596 32984 out.go:129] ok
I0324 20:37:32.237439 32984 out.go:129] I0324 20:32:59.962379 8815 api_server.go:137] control plane version: v1.20.2
I0324 20:37:32.238303 32984 out.go:129] I0324 20:32:59.962390 8815 api_server.go:127] duration metric: took 15.225781ms to wait for apiserver health ...
I0324 20:37:32.239117 32984 out.go:129] I0324 20:32:59.962398 8815 system_pods.go:41] waiting for kube-system pods to appear ...
I0324 20:37:32.239985 32984 out.go:129] I0324 20:32:59.978459 8815 system_pods.go:57] 7 kube-system pods found
I0324 20:37:32.241066 32984 out.go:129] I0324 20:32:59.978479 8815 system_pods.go:59] "coredns-74ff55c5b-nwnzj" [d11c7b8e-7df1-40b5-ac42-4e63a4ddf81c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0324 20:37:32.242212 32984 out.go:129] I0324 20:32:59.978486 8815 system_pods.go:59] "etcd-ubuntu" [63acfffb-300a-463c-a36d-cf683ef79b4d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
I0324 20:37:32.243261 32984 out.go:129] I0324 20:32:59.978492 8815 system_pods.go:59] "kube-apiserver-ubuntu" [2763368d-fa5b-4fea-bd78-b4221ebd0d86] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
I0324 20:37:32.244156 32984 out.go:129] I0324 20:32:59.978498 8815 system_pods.go:59] "kube-controller-manager-ubuntu" [d1f0e063-bd01-4410-8b59-c847d6616885] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
I0324 20:37:32.244952 32984 out.go:129] I0324 20:32:59.978506 8815 system_pods.go:59] "kube-proxy-hw57j" [fb4dfa7c-728b-477e-82d1-cd9fe1aa3d51] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0324 20:37:32.245714 32984 out.go:129] I0324 20:32:59.978514 8815 system_pods.go:59] "kube-scheduler-ubuntu" [49d6672a-86b2-4df1-bf46-c3ce87519d11] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
I0324 20:37:32.246613 32984 out.go:129] I0324 20:32:59.978518 8815 system_pods.go:59] "storage-provisioner" [bdb93406-7f9d-4e5e-91be-300b1353683e] Running
I0324 20:37:32.247403 32984 out.go:129] I0324 20:32:59.978523 8815 system_pods.go:72] duration metric: took 16.121183ms to wait for pod list to return data ...
I0324 20:37:32.248178 32984 out.go:129] I0324 20:32:59.978531 8815 kubeadm.go:541] duration metric: took 840.741486ms to wait for : map[apiserver:true system_pods:true] ...
I0324 20:37:32.248973 32984 out.go:129] I0324 20:32:59.978544 8815 node_conditions.go:101] verifying NodePressure condition ...
I0324 20:37:32.249707 32984 out.go:129] I0324 20:32:59.987019 8815 node_conditions.go:121] node storage ephemeral capacity is 153250288Ki
I0324 20:37:32.250563 32984 out.go:129] I0324 20:32:59.987036 8815 node_conditions.go:122] node cpu capacity is 8
I0324 20:37:32.251280 32984 out.go:129] I0324 20:32:59.987049 8815 node_conditions.go:104] duration metric: took 8.49947ms to run NodePressure ...
I0324 20:37:32.252187 32984 out.go:129] I0324 20:32:59.987058 8815 start.go:207] waiting for startup goroutines ...
I0324 20:37:32.252926 32984 out.go:129] I0324 20:33:00.024310 8815 api_server.go:184] freezer state: "THAWED"
I0324 20:37:32.253678 32984 out.go:129] I0324 20:33:00.024339 8815 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ...
I0324 20:37:32.254802 32984 out.go:129] I0324 20:33:00.034190 8815 api_server.go:241] https://192.168.1.6:8443/healthz returned 200:
I0324 20:37:32.255803 32984 out.go:129] ok
I0324 20:37:32.256741 32984 out.go:129] I0324 20:33:00.039349 8815 out.go:129] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v4
I0324 20:37:32.257605 32984 out.go:129] I0324 20:33:00.039485 8815 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0324 20:37:32.258571 32984 out.go:129] I0324 20:33:00.036555 8815 api_server.go:162] apiserver freezer: "2:freezer:/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/c9b0c45237e1ede262f39b93e72833226b38e510632821a9c1606159ff2bdb2b"
I0324 20:37:32.259836 32984 out.go:129] I0324 20:33:00.039606 8815 exec_runner.go:52] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/c9b0c45237e1ede262f39b93e72833226b38e510632821a9c1606159ff2bdb2b/freezer.state
I0324 20:37:32.261420 32984 out.go:129] I0324 20:33:00.040797 8815 exec_runner.go:145] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ...
I0324 20:37:32.262735 32984 out.go:129] I0324 20:33:00.040805 8815 exec_runner.go:190] rm: /etc/kubernetes/addons/storage-provisioner.yaml
I0324 20:37:32.264023 32984 out.go:129] I0324 20:33:00.040848 8815 exec_runner.go:152] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0324 20:37:32.265385 32984 out.go:129] I0324 20:33:00.040964 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube759370594 /etc/kubernetes/addons/storage-provisioner.yaml
I0324 20:37:32.266498 32984 out.go:129] I0324 20:33:00.155301 8815 api_server.go:184] freezer state: "THAWED"
I0324 20:37:32.267516 32984 out.go:129] I0324 20:33:00.155324 8815 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ...
I0324 20:37:32.268468 32984 out.go:129] I0324 20:33:00.163287 8815 api_server.go:241] https://192.168.1.6:8443/healthz returned 200:
I0324 20:37:32.269399 32984 out.go:129] ok
I0324 20:37:32.270461 32984 out.go:129] I0324 20:33:00.170255 8815 out.go:129] ▪ Using image metallb/speaker:v0.8.2
I0324 20:37:32.271663 32984 out.go:129] I0324 20:33:02.603535 913 exec_runner.go:85] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (2m44.164685432s)
I0324 20:37:32.272790 32984 out.go:129] I0324 20:33:02.603586 913 exec_runner.go:52] Run: sudo systemctl stop -f kubelet
I0324 20:37:32.274264 32984 out.go:129] I0324 20:33:02.770051 913 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0324 20:37:32.275589 32984 out.go:129] W0324 20:33:02.947357 913 kubeadm.go:849] found 14 kube-system containers to stop
I0324 20:37:32.276677 32984 out.go:129] I0324 20:33:02.947376 913 docker.go:261] Stopping containers: [f539e8aee96d 667becd48cb9 12fa9a9c0491 bd2c0893c0fb d16462c99b2e 992f89184031 5217587868e2 c9b0c45237e1 e538b5fe9dee eb6f023ce022 83d6297fb312 cdebe7a15ad5 c69d567f902d ecbfc9440cfc]
I0324 20:37:32.277582 32984 out.go:129] I0324 20:33:02.947446 913 exec_runner.go:52] Run: docker stop f539e8aee96d 667becd48cb9 12fa9a9c0491 bd2c0893c0fb d16462c99b2e 992f89184031 5217587868e2 c9b0c45237e1 e538b5fe9dee eb6f023ce022 83d6297fb312 cdebe7a15ad5 c69d567f902d ecbfc9440cfc
I0324 20:37:32.278515 32984 out.go:129] I0324 20:33:00.173555 8815 out.go:129] ▪ Using image metallb/controller:v0.8.2
I0324 20:37:32.279728 32984 out.go:129] I0324 20:33:00.173910 8815 addons.go:253] installing /etc/kubernetes/addons/metallb.yaml
I0324 20:37:32.280620 32984 out.go:129] I0324 20:33:00.173951 8815 exec_runner.go:145] found /etc/kubernetes/addons/metallb.yaml, removing ...
I0324 20:37:32.281404 32984 out.go:129] I0324 20:33:00.173957 8815 exec_runner.go:190] rm: /etc/kubernetes/addons/metallb.yaml
I0324 20:37:32.282181 32984 out.go:129] I0324 20:33:00.174014 8815 exec_runner.go:152] cp: memory --> /etc/kubernetes/addons/metallb.yaml (5606 bytes)
I0324 20:37:32.283007 32984 out.go:129] I0324 20:33:00.174187 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube865072985 /etc/kubernetes/addons/metallb.yaml
I0324 20:37:32.283744 32984 out.go:129] I0324 20:33:00.193001 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0324 20:37:32.284449 32984 out.go:129] I0324 20:33:00.350404 8815 addons.go:253] installing /etc/kubernetes/addons/metallb-config.yaml
I0324 20:37:32.285235 32984 out.go:129] I0324 20:33:00.350434 8815 exec_runner.go:145] found /etc/kubernetes/addons/metallb-config.yaml, removing ...
I0324 20:37:32.285984 32984 out.go:129] I0324 20:33:00.350439 8815 exec_runner.go:190] rm: /etc/kubernetes/addons/metallb-config.yaml
I0324 20:37:32.286766 32984 out.go:129] I0324 20:33:00.350509 8815 exec_runner.go:152] cp: memory --> /etc/kubernetes/addons/metallb-config.yaml (217 bytes)
I0324 20:37:32.287448 32984 out.go:129] I0324 20:33:00.350776 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube428137700 /etc/kubernetes/addons/metallb-config.yaml
I0324 20:37:32.288235 32984 out.go:129] I0324 20:33:00.462703 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/metallb.yaml -f /etc/kubernetes/addons/metallb-config.yaml
I0324 20:37:32.289026 32984 out.go:129] I0324 20:33:00.464568 8815 exec_runner.go:85] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl scale deployment --replicas=1 coredns -n=kube-system: (1.318597561s)
I0324 20:37:32.289849 32984 out.go:129] I0324 20:33:00.464581 8815 start.go:601] successfully scaled coredns replicas to 1
I0324 20:37:32.290703 32984 out.go:129] I0324 20:33:00.762393 8815 exec_runner.go:52] Run: sudo egrep ^[0-9]+:freezer: /proc/13063/cgroup
I0324 20:37:32.291459 32984 out.go:129] I0324 20:33:00.862355 8815 api_server.go:162] apiserver freezer: "2:freezer:/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/c9b0c45237e1ede262f39b93e72833226b38e510632821a9c1606159ff2bdb2b"
I0324 20:37:32.292299 32984 out.go:129] I0324 20:33:00.862406 8815 exec_runner.go:52] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/c9b0c45237e1ede262f39b93e72833226b38e510632821a9c1606159ff2bdb2b/freezer.state
I0324 20:37:32.293310 32984 out.go:129] I0324 20:33:00.952567 8815 api_server.go:184] freezer state: "THAWED"
I0324 20:37:32.294374 32984 out.go:129] I0324 20:33:00.952592 8815 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ...
I0324 20:37:32.295251 32984 out.go:129] I0324 20:33:00.966414 8815 api_server.go:241] https://192.168.1.6:8443/healthz returned 200:
I0324 20:37:32.296048 32984 out.go:129] ok
I0324 20:37:32.297003 32984 out.go:129] I0324 20:33:00.966475 8815 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.297976 32984 out.go:129] I0324 20:33:00.966507 8815 exec_runner.go:145] found /etc/kubernetes/addons/storageclass.yaml, removing ...
I0324 20:37:32.298954 32984 out.go:129] I0324 20:33:00.966512 8815 exec_runner.go:190] rm: /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.299813 32984 out.go:129] I0324 20:33:00.966552 8815 exec_runner.go:152] cp: memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0324 20:37:32.300827 32984 out.go:129] I0324 20:33:00.966871 8815 exec_runner.go:52] Run: sudo cp -a /tmp/minikube383975155 /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.301828 32984 out.go:129] I0324 20:33:01.074391 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.302747 32984 out.go:129] I0324 20:33:02.515154 8815 exec_runner.go:85] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.322123271s)
I0324 20:37:32.303599 32984 out.go:129] I0324 20:33:02.670872 8815 exec_runner.go:85] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/metallb.yaml -f /etc/kubernetes/addons/metallb-config.yaml: (2.208128582s)
I0324 20:37:32.304594 32984 out.go:129] I0324 20:33:02.834268 8815 exec_runner.go:85] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.759847487s)
I0324 20:37:32.305727 32984 out.go:129] W0324 20:33:02.834299 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.306921 32984 out.go:129] stdout:
I0324 20:37:32.307948 32984 out.go:129]
I0324 20:37:32.308809 32984 out.go:129] stderr:
I0324 20:37:32.309636 32984 out.go:129] error: error when retrieving current configuration of:
I0324 20:37:32.310405 32984 out.go:129] Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
I0324 20:37:32.311208 32984 out.go:129] Name: "standard", Namespace: ""
I0324 20:37:32.312042 32984 out.go:129] from server for: "/etc/kubernetes/addons/storageclass.yaml": Get "https://localhost:8443/apis/storage.k8s.io/v1/storageclasses/standard": dial tcp 127.0.0.1:8443: connect: connection refused
I0324 20:37:32.312809 32984 out.go:129] I0324 20:33:02.834312 8815 retry.go:31] will retry after 360.127272ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.313562 32984 out.go:129] stdout:
I0324 20:37:32.314413 32984 out.go:129]
I0324 20:37:32.315138 32984 out.go:129] stderr:
I0324 20:37:32.315863 32984 out.go:129] error: error when retrieving current configuration of:
I0324 20:37:32.316620 32984 out.go:129] Resource: "storage.k8s.io/v1, Resource=storageclasses", GroupVersionKind: "storage.k8s.io/v1, Kind=StorageClass"
I0324 20:37:32.317331 32984 out.go:129] Name: "standard", Namespace: ""
I0324 20:37:32.318083 32984 out.go:129] from server for: "/etc/kubernetes/addons/storageclass.yaml": Get "https://localhost:8443/apis/storage.k8s.io/v1/storageclasses/standard": dial tcp 127.0.0.1:8443: connect: connection refused
I0324 20:37:32.318843 32984 out.go:129] I0324 20:33:03.195609 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.319538 32984 out.go:129] W0324 20:33:03.957242 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.320266 32984 out.go:129] stdout:
I0324 20:37:32.321088 32984 out.go:129]
I0324 20:37:32.321998 32984 out.go:129] stderr:
I0324 20:37:32.322878 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.323789 32984 out.go:129] I0324 20:33:03.957263 8815 retry.go:31] will retry after 436.71002ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.324890 32984 out.go:129] stdout:
I0324 20:37:32.326083 32984 out.go:129]
I0324 20:37:32.327358 32984 out.go:129] stderr:
I0324 20:37:32.328547 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.329736 32984 out.go:129] I0324 20:33:04.395229 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.330900 32984 out.go:129] W0324 20:33:04.733472 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.332484 32984 out.go:129] stdout:
I0324 20:37:32.333620 32984 out.go:129]
I0324 20:37:32.334640 32984 out.go:129] stderr:
I0324 20:37:32.335482 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.336356 32984 out.go:129] I0324 20:33:04.733494 8815 retry.go:31] will retry after 527.46423ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.337254 32984 out.go:129] stdout:
I0324 20:37:32.338235 32984 out.go:129]
I0324 20:37:32.339208 32984 out.go:129] stderr:
I0324 20:37:32.340016 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.340827 32984 out.go:129] I0324 20:33:06.709124 913 exec_runner.go:85] Completed: docker stop f539e8aee96d 667becd48cb9 12fa9a9c0491 bd2c0893c0fb d16462c99b2e 992f89184031 5217587868e2 c9b0c45237e1 e538b5fe9dee eb6f023ce022 83d6297fb312 cdebe7a15ad5 c69d567f902d ecbfc9440cfc: (3.761627104s)
I0324 20:37:32.341742 32984 out.go:129] I0324 20:33:06.709290 913 exec_runner.go:52] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0324 20:37:32.342617 32984 out.go:129] I0324 20:33:06.774142 913 exec_runner.go:52] Run: docker version --format
I0324 20:37:32.343488 32984 out.go:129] I0324 20:33:07.234862 913 exec_runner.go:52] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0324 20:37:32.344377 32984 out.go:129] I0324 20:33:07.380796 913 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
I0324 20:37:32.345151 32984 out.go:129] stdout:
I0324 20:37:32.345938 32984 out.go:129]
I0324 20:37:32.346765 32984 out.go:129] stderr:
I0324 20:37:32.347482 32984 out.go:129] ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0324 20:37:32.348177 32984 out.go:129] ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0324 20:37:32.348981 32984 out.go:129] ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0324 20:37:32.349672 32984 out.go:129] ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0324 20:37:32.350458 32984 out.go:129] I0324 20:33:07.380832 913 exec_runner.go:98] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I0324 20:37:32.351373 32984 out.go:129] I0324 20:33:05.261537 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.352288 32984 out.go:129] W0324 20:33:05.942447 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.353115 32984 out.go:129] stdout:
I0324 20:37:32.353984 32984 out.go:129]
I0324 20:37:32.354935 32984 out.go:129] stderr:
I0324 20:37:32.355917 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.356914 32984 out.go:129] I0324 20:33:05.942479 8815 retry.go:31] will retry after 780.162888ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.358321 32984 out.go:129] stdout:
I0324 20:37:32.359900 32984 out.go:129]
I0324 20:37:32.361296 32984 out.go:129] stderr:
I0324 20:37:32.362401 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.363393 32984 out.go:129] I0324 20:33:06.722912 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.364143 32984 out.go:129] W0324 20:33:07.043311 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.364864 32984 out.go:129] stdout:
I0324 20:37:32.365859 32984 out.go:129]
I0324 20:37:32.366962 32984 out.go:129] stderr:
I0324 20:37:32.367781 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.368612 32984 out.go:129] I0324 20:33:07.043326 8815 retry.go:31] will retry after 1.502072952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.369569 32984 out.go:129] stdout:
I0324 20:37:32.370379 32984 out.go:129]
I0324 20:37:32.371165 32984 out.go:129] stderr:
I0324 20:37:32.371975 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.372740 32984 out.go:129] I0324 20:33:08.546323 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.373476 32984 out.go:129] W0324 20:33:08.849127 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.374351 32984 out.go:129] stdout:
I0324 20:37:32.375076 32984 out.go:129]
I0324 20:37:32.375882 32984 out.go:129] stderr:
I0324 20:37:32.376645 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.377425 32984 out.go:129] I0324 20:33:08.849141 8815 retry.go:31] will retry after 1.073826528s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.378135 32984 out.go:129] stdout:
I0324 20:37:32.378927 32984 out.go:129]
I0324 20:37:32.379816 32984 out.go:129] stderr:
I0324 20:37:32.380534 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.381219 32984 out.go:129] I0324 20:33:09.923898 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.382066 32984 out.go:129] I0324 20:33:11.121271 913 out.go:150] - Generating certificates and keys ...
I0324 20:37:32.382799 32984 out.go:129] I0324 20:33:13.703619 913 out.go:150] - Booting up control plane ...
I0324 20:37:32.383537 32984 out.go:129] W0324 20:33:10.566387 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.384310 32984 out.go:129] stdout:
I0324 20:37:32.385058 32984 out.go:129]
I0324 20:37:32.385810 32984 out.go:129] stderr:
I0324 20:37:32.386639 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.387611 32984 out.go:129] I0324 20:33:10.566405 8815 retry.go:31] will retry after 1.869541159s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.388472 32984 out.go:129] stdout:
I0324 20:37:32.389348 32984 out.go:129]
I0324 20:37:32.390602 32984 out.go:129] stderr:
I0324 20:37:32.391399 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.392082 32984 out.go:129] I0324 20:33:12.436146 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.392801 32984 out.go:129] I0324 20:33:13.756926 8815 exec_runner.go:85] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.320750818s)
I0324 20:37:32.393562 32984 out.go:129] W0324 20:33:13.756944 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.394311 32984 out.go:129] stdout:
I0324 20:37:32.395075 32984 out.go:129]
I0324 20:37:32.395867 32984 out.go:129] stderr:
I0324 20:37:32.396736 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.397743 32984 out.go:129] I0324 20:33:13.756952 8815 retry.go:31] will retry after 2.549945972s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.398549 32984 out.go:129] stdout:
I0324 20:37:32.399340 32984 out.go:129]
I0324 20:37:32.400006 32984 out.go:129] stderr:
I0324 20:37:32.400678 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.401450 32984 out.go:129] I0324 20:33:16.307354 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.402155 32984 out.go:129] W0324 20:33:16.387910 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.402922 32984 out.go:129] stdout:
I0324 20:37:32.403614 32984 out.go:129]
I0324 20:37:32.404311 32984 out.go:129] stderr:
I0324 20:37:32.404983 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.405716 32984 out.go:129] I0324 20:33:16.387929 8815 retry.go:31] will retry after 5.131623747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.406640 32984 out.go:129] stdout:
I0324 20:37:32.407359 32984 out.go:129]
I0324 20:37:32.408087 32984 out.go:129] stderr:
I0324 20:37:32.408867 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.409667 32984 out.go:129] I0324 20:33:21.519990 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.410578 32984 out.go:129] W0324 20:33:21.602237 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.411292 32984 out.go:129] stdout:
I0324 20:37:32.412111 32984 out.go:129]
I0324 20:37:32.412889 32984 out.go:129] stderr:
I0324 20:37:32.413671 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.414474 32984 out.go:129] I0324 20:33:21.602249 8815 retry.go:31] will retry after 9.757045979s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.415211 32984 out.go:129] stdout:
I0324 20:37:32.416009 32984 out.go:129]
I0324 20:37:32.416902 32984 out.go:129] stderr:
I0324 20:37:32.417918 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.419100 32984 out.go:129] I0324 20:33:31.359930 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.420288 32984 out.go:129] I0324 20:33:44.782154 8815 exec_runner.go:85] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.151453695s)
I0324 20:37:32.421549 32984 out.go:129] W0324 20:33:44.782173 8815 addons.go:274] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.423175 32984 out.go:129] stdout:
I0324 20:37:32.424332 32984 out.go:129]
I0324 20:37:32.425373 32984 out.go:129] stderr:
I0324 20:37:32.426352 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.427234 32984 out.go:129] I0324 20:33:44.782182 8815 retry.go:31] will retry after 18.937774914s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: exit status 1
I0324 20:37:32.428226 32984 out.go:129] stdout:
I0324 20:37:32.429027 32984 out.go:129]
I0324 20:37:32.429804 32984 out.go:129] stderr:
I0324 20:37:32.430738 32984 out.go:129] The connection to the server localhost:8443 was refused - did you specify the right host or port?
I0324 20:37:32.431502 32984 out.go:129] I0324 20:34:01.296836 913 out.go:150] - Configuring RBAC rules ...
I0324 20:37:32.432339 32984 out.go:129] I0324 20:34:03.722536 8815 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0324 20:37:32.433305 32984 out.go:129] I0324 20:34:33.445425 8815 exec_runner.go:85] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (29.722860152s)
I0324 20:37:32.434338 32984 out.go:129] W0324 20:34:09.323524 913 out.go:191] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem": exit status 1
I0324 20:37:32.435523 32984 out.go:129] stdout:
I0324 20:37:32.436575 32984 out.go:129] [init] Using Kubernetes version: v1.20.2
I0324 20:37:32.437379 32984 out.go:129] [preflight] Running pre-flight checks
I0324 20:37:32.438325 32984 out.go:129] [preflight] Pulling images required for setting up a Kubernetes cluster
I0324 20:37:32.439118 32984 out.go:129] [preflight] This might take a minute or two, depending on the speed of your internet connection
I0324 20:37:32.439955 32984 out.go:129] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0324 20:37:32.440809 32984 out.go:129] [certs] Using certificateDir folder "/var/lib/minikube/certs"
I0324 20:37:32.441630 32984 out.go:129] [certs] Using existing ca certificate authority
I0324 20:37:32.442495 32984 out.go:129] [certs] Using existing apiserver certificate and key on disk
I0324 20:37:32.443280 32984 out.go:129] [certs] Using existing apiserver-kubelet-client certificate and key on disk
I0324 20:37:32.444168 32984 out.go:129] [certs] Using existing front-proxy-ca certificate authority
I0324 20:37:32.445113 32984 out.go:129] [certs] Using existing front-proxy-client certificate and key on disk
I0324 20:37:32.446070 32984 out.go:129] [certs] Using existing etcd/ca certificate authority
I0324 20:37:32.447248 32984 out.go:129] [certs] Using existing etcd/server certificate and key on disk
I0324 20:37:32.448333 32984 out.go:129] [certs] Using existing etcd/peer certificate and key on disk
I0324 20:37:32.449657 32984 out.go:129] [certs] Using existing etcd/healthcheck-client certificate and key on disk
I0324 20:37:32.450980 32984 out.go:129] [certs] Using existing apiserver-etcd-client certificate and key on disk
I0324 20:37:32.451971 32984 out.go:129] [certs] Using the existing "sa" key
I0324 20:37:32.452943 32984 out.go:129] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0324 20:37:32.454262 32984 out.go:129] [kubeconfig] Writing "admin.conf" kubeconfig file
I0324 20:37:32.455466 32984 out.go:129] [kubeconfig] Writing "kubelet.conf" kubeconfig file
I0324 20:37:32.456467 32984 out.go:129] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0324 20:37:32.457502 32984 out.go:129] [kubeconfig] Writing "scheduler.conf" kubeconfig file
I0324 20:37:32.458352 32984 out.go:129] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
I0324 20:37:32.461516 32984 out.go:129] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
I0324 20:37:32.462483 32984 out.go:129] [kubelet-start] Starting the kubelet
I0324 20:37:32.463535 32984 out.go:129] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
I0324 20:37:32.464581 32984 out.go:129] [control-plane] Creating static Pod manifest for "kube-apiserver"
I0324 20:37:32.465661 32984 out.go:129] [control-plane] Creating static Pod manifest for "kube-controller-manager"
I0324 20:37:32.466755 32984 out.go:129] [control-plane] Creating static Pod manifest for "kube-scheduler"
I0324 20:37:32.467773 32984 out.go:129] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0324 20:37:32.468754 32984 out.go:129] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0324 20:37:32.469683 32984 out.go:129] [kubelet-check] Initial timeout of 40s passed.
I0324 20:37:32.470656 32984 out.go:129] [apiclient] All control plane components are healthy after 46.504950 seconds
I0324 20:37:32.471942 32984 out.go:129] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I0324 20:37:32.472779 32984 out.go:129] [kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
I0324 20:37:32.473876 32984 out.go:129] [upload-certs] Skipping phase. Please see --upload-certs
I0324 20:37:32.474768 32984 out.go:129] [mark-control-plane] Marking the node ubuntu as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
I0324 20:37:32.475581 32984 out.go:129] [bootstrap-token] Using token: 9f1lx6.uddcz62vmhbcndfh
I0324 20:37:32.476402 32984 out.go:129] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
I0324 20:37:32.477178 32984 out.go:129] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
I0324 20:37:32.477940 32984 out.go:129] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
I0324 20:37:32.478905 32984 out.go:129] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
I0324 20:37:32.479736 32984 out.go:129] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
I0324 20:37:32.480721 32984 out.go:129] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I0324 20:37:32.481595 32984 out.go:129] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I0324 20:37:32.482580 32984 out.go:129] [addons] Applied essential addon: CoreDNS
I0324 20:37:32.483684 32984 out.go:129]
I0324 20:37:32.484944 32984 out.go:129] stderr:
I0324 20:37:32.486391 32984 out.go:129] [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
I0324 20:37:32.487776 32984 out.go:129] [WARNING Swap]: running with swap on is not supported. Please disable swap
I0324 20:37:32.488913 32984 out.go:129] [WARNING FileExisting-ethtool]: ethtool not found in system path
I0324 20:37:32.490071 32984 out.go:129] [WARNING FileExisting-socat]: socat not found in system path
I0324 20:37:32.491275 32984 out.go:129] [WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.5. Latest validated version: 19.03
I0324 20:37:32.492323 32984 out.go:129] [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0324 20:37:32.493327 32984 out.go:129] error execution phase addon/kube-proxy: unable to create daemonset: etcdserver: request timed out
I0324 20:37:32.494241 32984 out.go:129] To see the stack trace of this error execute with --v=5 or higher
I0324 20:37:32.494991 32984 out.go:129]
I0324 20:37:32.495937 32984 out.go:129] I0324 20:34:33.293178 913 exec_runner.go:52] Run: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force"
I0324 20:37:32.496764 32984 out.go:129] I0324 20:34:33.446850 8815 out.go:129] 🌟 Enabled addons: storage-provisioner, metallb, default-storageclass
I0324 20:37:32.497626 32984 out.go:129] I0324 20:34:33.446901 8815 addons.go:383] enableAddons completed in 1m34.31704554s
I0324 20:37:32.498766 32984 out.go:129] I0324 20:34:34.752970 8815 start.go:460] kubectl: 1.20.4, cluster: 1.20.2 (minor skew: 0)
I0324 20:37:32.500293 32984 out.go:129] I0324 20:34:34.754337 8815 out.go:129] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
I0324 20:37:32.501694 32984 out.go:129] I0324 20:34:44.637262 913 exec_runner.go:85] Completed: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm reset --cri-socket /var/run/dockershim.sock --force": (11.344052175s)
I0324 20:37:32.503402 32984 out.go:129] I0324 20:34:44.637307 913 exec_runner.go:52] Run: sudo systemctl stop -f kubelet
I0324 20:37:32.505112 32984 out.go:129] I0324 20:34:44.652604 913 exec_runner.go:52] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format=
I0324 20:37:32.508057 32984 out.go:129] I0324 20:34:44.710611 913 exec_runner.go:52] Run: docker version --format
I0324 20:37:32.509092 32984 out.go:129] I0324 20:34:44.786650 913 exec_runner.go:52] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0324 20:37:32.510037 32984 out.go:129] I0324 20:34:44.795067 913 kubeadm.go:150] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: exit status 2
I0324 20:37:32.510929 32984 out.go:129] stdout:
I0324 20:37:32.511869 32984 out.go:129]
I0324 20:37:32.512636 32984 out.go:129] stderr:
I0324 20:37:32.518832 32984 out.go:129] ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
I0324 20:37:32.522896 32984 out.go:129] ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
I0324 20:37:32.524298 32984 out.go:129] ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
I0324 20:37:32.525469 32984 out.go:129] ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0324 20:37:32.526670 32984 out.go:129] I0324 20:34:44.795096 913 exec_runner.go:98] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem"
I0324 20:37:32.529036 32984 out.go:129] I0324 20:34:45.959545 913 out.go:150] - Generating certificates and keys ...
I0324 20:37:32.530177 32984 out.go:129] I0324 20:34:47.478341 913 out.go:150] - Booting up control plane ...
I0324 20:37:32.531607 32984 out.go:129] I0324 20:35:32.549554 913 out.go:150] - Configuring RBAC rules ...
I0324 20:37:32.532898 32984 out.go:129] I0324 20:35:33.072036 913 cni.go:74] Creating CNI manager for ""
I0324 20:37:32.534088 32984 out.go:129] I0324 20:35:33.072049 913 cni.go:129] Driver none used, CNI unnecessary in this configuration, recommending no CNI
I0324 20:37:32.535320 32984 out.go:129] I0324 20:35:33.072079 913 exec_runner.go:52] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0324 20:37:32.536371 32984 out.go:129] I0324 20:35:33.072231 913 exec_runner.go:52] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0324 20:37:32.537287 32984 out.go:129] I0324 20:35:33.072323 913 exec_runner.go:52] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.18.1 minikube.k8s.io/commit=09ee84d530de4a92f00f1c5dbc34cead092b95bc minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2021_03_24T20_35_33_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0324 20:37:32.538404 32984 out.go:129] I0324 20:35:33.189876 913 kubeadm.go:995] duration 
metric: took 117.678727ms to wait for elevateKubeSystemPrivileges. I0324 20:37:32.539283 32984 out.go:129] I0324 20:35:33.283297 913 ops.go:34] apiserver oom_adj: -16 I0324 20:37:32.540217 32984 out.go:129] I0324 20:35:33.283344 913 kubeadm.go:387] StartCluster complete in 6m23.769524346s I0324 20:37:32.541619 32984 out.go:129] I0324 20:35:33.283363 913 settings.go:142] acquiring lock: {Name:mk19004591210340446308469f521c5cfa3e1599 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0324 20:37:32.542605 32984 out.go:129] I0324 20:35:33.283589 913 settings.go:150] Updating kubeconfig: /root/.kube/config I0324 20:37:32.543552 32984 out.go:129] I0324 20:35:33.284811 913 lock.go:36] WriteFile acquiring /root/.kube/config: {Name:mk72a1487fd2da23da9e8181e16f352a6105bd56 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0324 20:37:32.544525 32984 out.go:129] I0324 20:35:33.286505 913 out.go:129] * Configuring local host environment ... I0324 20:37:32.545356 32984 out.go:129] I0324 20:35:33.285389 913 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl scale deployment --replicas=1 coredns -n=kube-system I0324 20:37:32.546259 32984 out.go:129] W0324 20:35:33.286583 913 out.go:191] * I0324 20:37:32.547112 32984 out.go:129] W0324 20:35:33.286617 913 out.go:191] ! The 'none' driver is designed for experts who need to integrate with an existing VM I0324 20:37:32.547974 32984 out.go:129] W0324 20:35:33.286644 913 out.go:191] * Most users should use the newer 'docker' driver instead, which does not require root! I0324 20:37:32.548808 32984 out.go:129] W0324 20:35:33.286672 913 out.go:191] * For more information, see: https://minikube.sigs.k8s.io/docs/reference/drivers/none/ I0324 20:37:32.549561 32984 out.go:129] W0324 20:35:33.286703 913 out.go:191] * I0324 20:37:32.550399 32984 out.go:129] W0324 20:35:33.286782 913 out.go:191] ! 
kubectl and minikube configuration will be stored in /root I0324 20:37:32.551258 32984 out.go:129] W0324 20:35:33.286816 913 out.go:191] ! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run: I0324 20:37:32.552133 32984 out.go:129] W0324 20:35:33.286841 913 out.go:191] * I0324 20:37:32.552959 32984 out.go:129] W0324 20:35:33.286914 913 out.go:191] - sudo mv /root/.kube /root/.minikube $HOME I0324 20:37:32.553736 32984 out.go:129] W0324 20:35:33.286946 913 out.go:191] - sudo chown -R $USER $HOME/.kube $HOME/.minikube I0324 20:37:32.554863 32984 out.go:129] W0324 20:35:33.286971 913 out.go:191] * I0324 20:37:32.556367 32984 out.go:129] W0324 20:35:33.287031 913 out.go:191] * This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true I0324 20:37:32.557513 32984 out.go:129] I0324 20:35:33.287053 913 start.go:202] Will wait 6m0s for node up to I0324 20:37:32.558848 32984 out.go:129] I0324 20:35:33.285388 913 addons.go:381] enableAddons start: toEnable=map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:true metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[] I0324 20:37:32.559938 32984 out.go:129] I0324 20:35:33.287875 913 out.go:129] * Verifying Kubernetes components... 
I0324 20:37:32.561356 32984 out.go:129] I0324 20:35:33.287920 913 addons.go:58] Setting default-storageclass=true in profile "minikube" I0324 20:37:32.562902 32984 out.go:129] I0324 20:35:33.287927 913 addons.go:58] Setting storage-provisioner=true in profile "minikube" I0324 20:37:32.563924 32984 out.go:129] I0324 20:35:33.287938 913 addons.go:134] Setting addon storage-provisioner=true in "minikube" I0324 20:37:32.565101 32984 out.go:129] W0324 20:35:33.287943 913 addons.go:143] addon storage-provisioner should already be in state true I0324 20:37:32.566380 32984 out.go:129] I0324 20:35:33.287948 913 addons.go:284] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0324 20:37:32.567545 32984 out.go:129] I0324 20:35:33.287953 913 host.go:66] Checking if "minikube" exists ... I0324 20:37:32.568722 32984 out.go:129] I0324 20:35:33.287965 913 exec_runner.go:52] Run: sudo systemctl is-active --quiet service kubelet I0324 20:37:32.569894 32984 out.go:129] I0324 20:35:33.288039 913 addons.go:58] Setting metallb=true in profile "minikube" I0324 20:37:32.571014 32984 out.go:129] I0324 20:35:33.288047 913 addons.go:134] Setting addon metallb=true in "minikube" I0324 20:37:32.572018 32984 out.go:129] W0324 20:35:33.288052 913 addons.go:143] addon metallb should already be in state true I0324 20:37:32.572923 32984 out.go:129] I0324 20:35:33.288059 913 host.go:66] Checking if "minikube" exists ... I0324 20:37:32.574056 32984 out.go:129] I0324 20:35:33.288571 913 kubeconfig.go:93] found "minikube" server: "https://192.168.1.6:8443" I0324 20:37:32.575018 32984 out.go:129] I0324 20:35:33.288580 913 api_server.go:146] Checking apiserver status ... 
I0324 20:37:32.575947 32984 out.go:129] I0324 20:35:33.288633 913 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0324 20:37:32.576933 32984 out.go:129] I0324 20:35:33.288772 913 kubeconfig.go:93] found "minikube" server: "https://192.168.1.6:8443" I0324 20:37:32.578074 32984 out.go:129] I0324 20:35:33.288780 913 api_server.go:146] Checking apiserver status ... I0324 20:37:32.579155 32984 out.go:129] I0324 20:35:33.288834 913 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0324 20:37:32.579960 32984 out.go:129] I0324 20:35:33.288881 913 kubeconfig.go:93] found "minikube" server: "https://192.168.1.6:8443" I0324 20:37:32.580789 32984 out.go:129] I0324 20:35:33.288889 913 api_server.go:146] Checking apiserver status ... I0324 20:37:32.581598 32984 out.go:129] I0324 20:35:33.288913 913 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0324 20:37:32.582581 32984 out.go:129] I0324 20:35:33.304356 913 api_server.go:48] waiting for apiserver process to appear ... 
I0324 20:37:32.583383 32984 out.go:129] I0324 20:35:33.304466 913 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0324 20:37:32.584119 32984 out.go:129] I0324 20:35:33.483787 913 start.go:601] successfully scaled coredns replicas to 1 I0324 20:37:32.584913 32984 out.go:129] I0324 20:35:33.908790 913 exec_runner.go:52] Run: sudo egrep ^[0-9]+:freezer: /proc/31093/cgroup I0324 20:37:32.585665 32984 out.go:129] I0324 20:35:33.913506 913 exec_runner.go:52] Run: sudo egrep ^[0-9]+:freezer: /proc/31093/cgroup I0324 20:37:32.586413 32984 out.go:129] I0324 20:35:33.920077 913 api_server.go:162] apiserver freezer: "2:freezer:/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/ee9219711a8d35ecf8b37f3ed2557ff65a39f90086b2ab0eddd3c35b2a7dd205" I0324 20:37:32.587153 32984 out.go:129] I0324 20:35:33.920147 913 exec_runner.go:52] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/ee9219711a8d35ecf8b37f3ed2557ff65a39f90086b2ab0eddd3c35b2a7dd205/freezer.state I0324 20:37:32.587991 32984 out.go:129] I0324 20:35:33.924346 913 api_server.go:162] apiserver freezer: "2:freezer:/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/ee9219711a8d35ecf8b37f3ed2557ff65a39f90086b2ab0eddd3c35b2a7dd205" I0324 20:37:32.588692 32984 out.go:129] I0324 20:35:33.924432 913 exec_runner.go:52] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/ee9219711a8d35ecf8b37f3ed2557ff65a39f90086b2ab0eddd3c35b2a7dd205/freezer.state I0324 20:37:32.589404 32984 out.go:129] I0324 20:35:33.933888 913 api_server.go:184] freezer state: "THAWED" I0324 20:37:32.590083 32984 out.go:129] I0324 20:35:33.933932 913 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ... 
I0324 20:37:32.590955 32984 out.go:129] I0324 20:35:33.935408 913 api_server.go:184] freezer state: "THAWED" I0324 20:37:32.591731 32984 out.go:129] I0324 20:35:33.935429 913 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ... I0324 20:37:32.592501 32984 out.go:129] I0324 20:35:33.937225 913 api_server.go:68] duration metric: took 650.157299ms to wait for apiserver process to appear ... I0324 20:37:32.593179 32984 out.go:129] I0324 20:35:33.937237 913 api_server.go:84] waiting for apiserver healthz status ... I0324 20:37:32.593921 32984 out.go:129] I0324 20:35:33.937244 913 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ... I0324 20:37:32.594806 32984 out.go:129] I0324 20:35:33.946107 913 exec_runner.go:52] Run: sudo egrep ^[0-9]+:freezer: /proc/31093/cgroup I0324 20:37:32.595896 32984 out.go:129] I0324 20:35:33.948604 913 api_server.go:241] https://192.168.1.6:8443/healthz returned 200: I0324 20:37:32.596921 32984 out.go:129] ok I0324 20:37:32.597980 32984 out.go:129] I0324 20:35:33.948655 913 api_server.go:241] https://192.168.1.6:8443/healthz returned 200: I0324 20:37:32.598827 32984 out.go:129] ok I0324 20:37:32.599698 32984 out.go:129] I0324 20:35:33.948735 913 api_server.go:241] https://192.168.1.6:8443/healthz returned 200: I0324 20:37:32.600470 32984 out.go:129] ok I0324 20:37:32.601329 32984 out.go:129] I0324 20:35:33.950174 913 out.go:129] - Using image gcr.io/k8s-minikube/storage-provisioner:v4 I0324 20:37:32.602257 32984 out.go:129] I0324 20:35:33.951106 913 out.go:129] - Using image metallb/speaker:v0.8.2 I0324 20:37:32.603211 32984 out.go:129] I0324 20:35:33.950408 913 addons.go:253] installing /etc/kubernetes/addons/storage-provisioner.yaml I0324 20:37:32.604113 32984 out.go:129] I0324 20:35:33.952246 913 exec_runner.go:145] found /etc/kubernetes/addons/storage-provisioner.yaml, removing ... 
I0324 20:37:32.604974 32984 out.go:129] I0324 20:35:33.952254 913 exec_runner.go:190] rm: /etc/kubernetes/addons/storage-provisioner.yaml I0324 20:37:32.605751 32984 out.go:129] I0324 20:35:33.952311 913 exec_runner.go:152] cp: memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0324 20:37:32.606527 32984 out.go:129] I0324 20:35:33.952562 913 exec_runner.go:52] Run: sudo cp -a /tmp/minikube882351825 /etc/kubernetes/addons/storage-provisioner.yaml I0324 20:37:32.607334 32984 out.go:129] I0324 20:35:33.952735 913 out.go:129] - Using image metallb/controller:v0.8.2 I0324 20:37:32.608070 32984 out.go:129] I0324 20:35:33.952930 913 addons.go:253] installing /etc/kubernetes/addons/metallb.yaml I0324 20:37:32.608879 32984 out.go:129] I0324 20:35:33.952953 913 exec_runner.go:145] found /etc/kubernetes/addons/metallb.yaml, removing ... I0324 20:37:32.609669 32984 out.go:129] I0324 20:35:33.952961 913 exec_runner.go:190] rm: /etc/kubernetes/addons/metallb.yaml I0324 20:37:32.610485 32984 out.go:129] I0324 20:35:33.953031 913 exec_runner.go:152] cp: memory --> /etc/kubernetes/addons/metallb.yaml (5606 bytes) I0324 20:37:32.611188 32984 out.go:129] I0324 20:35:33.953252 913 exec_runner.go:52] Run: sudo cp -a /tmp/minikube927807996 /etc/kubernetes/addons/metallb.yaml I0324 20:37:32.611904 32984 out.go:129] I0324 20:35:33.956271 913 api_server.go:137] control plane version: v1.20.2 I0324 20:37:32.612659 32984 out.go:129] I0324 20:35:33.956281 913 api_server.go:127] duration metric: took 19.040856ms to wait for apiserver health ... I0324 20:37:32.613370 32984 out.go:129] I0324 20:35:33.956287 913 system_pods.go:41] waiting for kube-system pods to appear ... 
I0324 20:37:32.614112 32984 out.go:129] I0324 20:35:33.956815 913 api_server.go:162] apiserver freezer: "2:freezer:/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/ee9219711a8d35ecf8b37f3ed2557ff65a39f90086b2ab0eddd3c35b2a7dd205" I0324 20:37:32.614883 32984 out.go:129] I0324 20:35:33.956902 913 exec_runner.go:52] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/ee9219711a8d35ecf8b37f3ed2557ff65a39f90086b2ab0eddd3c35b2a7dd205/freezer.state I0324 20:37:32.618789 32984 out.go:129] I0324 20:35:33.958823 913 system_pods.go:57] 0 kube-system pods found I0324 20:37:32.619534 32984 out.go:129] I0324 20:35:33.958838 913 retry.go:31] will retry after 274.589896ms: only 0 pod(s) have shown up I0324 20:37:32.620291 32984 out.go:129] I0324 20:35:33.961495 913 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0324 20:37:32.621019 32984 out.go:129] I0324 20:35:33.961745 913 addons.go:253] installing /etc/kubernetes/addons/metallb-config.yaml I0324 20:37:32.621753 32984 out.go:129] I0324 20:35:33.961759 913 exec_runner.go:145] found /etc/kubernetes/addons/metallb-config.yaml, removing ... I0324 20:37:32.622519 32984 out.go:129] I0324 20:35:33.961763 913 exec_runner.go:190] rm: /etc/kubernetes/addons/metallb-config.yaml I0324 20:37:32.623217 32984 out.go:129] I0324 20:35:33.961831 913 exec_runner.go:152] cp: memory --> /etc/kubernetes/addons/metallb-config.yaml (217 bytes) I0324 20:37:32.623850 32984 out.go:129] I0324 20:35:33.962008 913 exec_runner.go:52] Run: sudo cp -a /tmp/minikube421988139 /etc/kubernetes/addons/metallb-config.yaml I0324 20:37:32.625990 32984 out.go:129] I0324 20:35:33.965541 913 api_server.go:184] freezer state: "THAWED" I0324 20:37:32.626717 32984 out.go:129] I0324 20:35:33.965579 913 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ... 
I0324 20:37:32.627386 32984 out.go:129] I0324 20:35:33.970461 913 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/metallb.yaml -f /etc/kubernetes/addons/metallb-config.yaml I0324 20:37:32.628012 32984 out.go:129] I0324 20:35:33.971805 913 api_server.go:241] https://192.168.1.6:8443/healthz returned 200: I0324 20:37:32.628671 32984 out.go:129] ok I0324 20:37:32.629371 32984 out.go:129] I0324 20:35:33.975925 913 addons.go:134] Setting addon default-storageclass=true in "minikube" I0324 20:37:32.630027 32984 out.go:129] W0324 20:35:33.975935 913 addons.go:143] addon default-storageclass should already be in state true I0324 20:37:32.630720 32984 out.go:129] I0324 20:35:33.975947 913 host.go:66] Checking if "minikube" exists ... I0324 20:37:32.632008 32984 out.go:129] I0324 20:35:33.976782 913 kubeconfig.go:93] found "minikube" server: "https://192.168.1.6:8443" I0324 20:37:32.632819 32984 out.go:129] I0324 20:35:33.976800 913 api_server.go:146] Checking apiserver status ... 
I0324 20:37:32.633524 32984 out.go:129] I0324 20:35:33.976855 913 exec_runner.go:52] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0324 20:37:32.634173 32984 out.go:129] I0324 20:35:34.150603 913 exec_runner.go:52] Run: sudo egrep ^[0-9]+:freezer: /proc/31093/cgroup I0324 20:37:32.634987 32984 out.go:129] I0324 20:35:34.162310 913 api_server.go:162] apiserver freezer: "2:freezer:/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/ee9219711a8d35ecf8b37f3ed2557ff65a39f90086b2ab0eddd3c35b2a7dd205" I0324 20:37:32.635978 32984 out.go:129] I0324 20:35:34.162354 913 exec_runner.go:52] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podb99608f5ec34b1ffda48abcbb9412760/ee9219711a8d35ecf8b37f3ed2557ff65a39f90086b2ab0eddd3c35b2a7dd205/freezer.state I0324 20:37:32.636968 32984 out.go:129] I0324 20:35:34.169696 913 api_server.go:184] freezer state: "THAWED" I0324 20:37:32.637929 32984 out.go:129] I0324 20:35:34.169714 913 api_server.go:221] Checking apiserver healthz at https://192.168.1.6:8443/healthz ... I0324 20:37:32.638761 32984 out.go:129] I0324 20:35:34.176232 913 api_server.go:241] https://192.168.1.6:8443/healthz returned 200: I0324 20:37:32.639572 32984 out.go:129] ok I0324 20:37:32.640793 32984 out.go:129] I0324 20:35:34.176289 913 addons.go:253] installing /etc/kubernetes/addons/storageclass.yaml I0324 20:37:32.641498 32984 out.go:129] I0324 20:35:34.176310 913 exec_runner.go:145] found /etc/kubernetes/addons/storageclass.yaml, removing ... 
I0324 20:37:32.642216 32984 out.go:129] I0324 20:35:34.176315 913 exec_runner.go:190] rm: /etc/kubernetes/addons/storageclass.yaml I0324 20:37:32.643001 32984 out.go:129] I0324 20:35:34.176383 913 exec_runner.go:152] cp: memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0324 20:37:32.643699 32984 out.go:129] I0324 20:35:34.176578 913 exec_runner.go:52] Run: sudo cp -a /tmp/minikube818984078 /etc/kubernetes/addons/storageclass.yaml I0324 20:37:32.644378 32984 out.go:129] I0324 20:35:34.186482 913 exec_runner.go:52] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0324 20:37:32.645272 32984 out.go:129] I0324 20:35:34.237824 913 system_pods.go:57] 0 kube-system pods found I0324 20:37:32.646460 32984 out.go:129] I0324 20:35:34.237837 913 retry.go:31] will retry after 316.221923ms: only 0 pod(s) have shown up I0324 20:37:32.647329 32984 out.go:129] I0324 20:35:34.450131 913 out.go:129] * Enabled addons: storage-provisioner, metallb, default-storageclass I0324 20:37:32.648057 32984 out.go:129] I0324 20:35:34.450174 913 addons.go:383] enableAddons completed in 1.164792532s I0324 20:37:32.648709 32984 out.go:129] I0324 20:35:34.557548 913 system_pods.go:57] 1 kube-system pods found I0324 20:37:32.649487 32984 out.go:129] I0324 20:35:34.557565 913 system_pods.go:59] "storage-provisioner" [b1a31819-050a-4a35-a815-91b404320d39] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) 
I0324 20:37:32.650313 32984 out.go:129] I0324 20:35:34.557573 913 retry.go:31] will retry after 298.496695ms: only 1 pod(s) have shown up I0324 20:37:32.651079 32984 out.go:129] I0324 20:35:34.859891 913 system_pods.go:57] 1 kube-system pods found I0324 20:37:32.651765 32984 out.go:129] I0324 20:35:34.859907 913 system_pods.go:59] "storage-provisioner" [b1a31819-050a-4a35-a815-91b404320d39] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0324 20:37:32.652562 32984 out.go:129] I0324 20:35:34.859917 913 retry.go:31] will retry after 404.865302ms: only 1 pod(s) have shown up I0324 20:37:32.653392 32984 out.go:129] I0324 20:35:35.268249 913 system_pods.go:57] 1 kube-system pods found I0324 20:37:32.654133 32984 out.go:129] I0324 20:35:35.268264 913 system_pods.go:59] "storage-provisioner" [b1a31819-050a-4a35-a815-91b404320d39] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0324 20:37:32.654921 32984 out.go:129] I0324 20:35:35.268272 913 retry.go:31] will retry after 643.082714ms: only 1 pod(s) have shown up I0324 20:37:32.655640 32984 out.go:129] I0324 20:35:35.914486 913 system_pods.go:57] 1 kube-system pods found I0324 20:37:32.656307 32984 out.go:129] I0324 20:35:35.914501 913 system_pods.go:59] "storage-provisioner" [b1a31819-050a-4a35-a815-91b404320d39] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) 
I0324 20:37:32.656930 32984 out.go:129] I0324 20:35:35.914509 913 retry.go:31] will retry after 944.229743ms: only 1 pod(s) have shown up I0324 20:37:32.657582 32984 out.go:129] I0324 20:35:36.866365 913 system_pods.go:57] 1 kube-system pods found I0324 20:37:32.658303 32984 out.go:129] I0324 20:35:36.866381 913 system_pods.go:59] "storage-provisioner" [b1a31819-050a-4a35-a815-91b404320d39] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0324 20:37:32.659327 32984 out.go:129] I0324 20:35:36.866391 913 retry.go:31] will retry after 753.142176ms: only 1 pod(s) have shown up I0324 20:37:32.660226 32984 out.go:129] I0324 20:35:37.623776 913 system_pods.go:57] 1 kube-system pods found I0324 20:37:32.661091 32984 out.go:129] I0324 20:35:37.623792 913 system_pods.go:59] "storage-provisioner" [b1a31819-050a-4a35-a815-91b404320d39] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0324 20:37:32.661880 32984 out.go:129] I0324 20:35:37.623801 913 retry.go:31] will retry after 1.248603221s: only 1 pod(s) have shown up I0324 20:37:32.662656 32984 out.go:129] I0324 20:35:38.877216 913 system_pods.go:57] 1 kube-system pods found I0324 20:37:32.663398 32984 out.go:129] I0324 20:35:38.877231 913 system_pods.go:59] "storage-provisioner" [b1a31819-050a-4a35-a815-91b404320d39] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) 
I0324 20:37:32.664398 32984 out.go:129] I0324 20:35:38.877239 913 retry.go:31] will retry after 1.161635404s: only 1 pod(s) have shown up I0324 20:37:32.665291 32984 out.go:129] I0324 20:35:40.043406 913 system_pods.go:57] 5 kube-system pods found I0324 20:37:32.665981 32984 out.go:129] I0324 20:35:40.043420 913 system_pods.go:59] "etcd-ubuntu" [27cb127d-37c3-4c85-bf34-1bccfabd4c76] Pending I0324 20:37:32.666751 32984 out.go:129] I0324 20:35:40.043423 913 system_pods.go:59] "kube-apiserver-ubuntu" [fdf13190-2985-4692-a589-066e90a53f2d] Pending I0324 20:37:32.667548 32984 out.go:129] I0324 20:35:40.043427 913 system_pods.go:59] "kube-controller-manager-ubuntu" [52b1fce4-dab9-41d6-b244-1104422d181d] Pending I0324 20:37:32.668459 32984 out.go:129] I0324 20:35:40.043429 913 system_pods.go:59] "kube-scheduler-ubuntu" [5a1b0f38-0f1a-4e5a-8e86-66729ef07c29] Pending I0324 20:37:32.669342 32984 out.go:129] I0324 20:35:40.043436 913 system_pods.go:59] "storage-provisioner" [b1a31819-050a-4a35-a815-91b404320d39] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.) I0324 20:37:32.670435 32984 out.go:129] I0324 20:35:40.043440 913 system_pods.go:72] duration metric: took 6.087149469s to wait for pod list to return data ... I0324 20:37:32.671457 32984 out.go:129] I0324 20:35:40.043447 913 kubeadm.go:541] duration metric: took 6.756384406s to wait for : map[apiserver:true system_pods:true] ... I0324 20:37:32.672226 32984 out.go:129] I0324 20:35:40.043458 913 node_conditions.go:101] verifying NodePressure condition ... 
I0324 20:37:32.673019 32984 out.go:129] I0324 20:35:40.046092 913 node_conditions.go:121] node storage ephemeral capacity is 153250288Ki I0324 20:37:32.673771 32984 out.go:129] I0324 20:35:40.046104 913 node_conditions.go:122] node cpu capacity is 8 I0324 20:37:32.674515 32984 out.go:129] I0324 20:35:40.046114 913 node_conditions.go:104] duration metric: took 2.652223ms to run NodePressure ... I0324 20:37:32.675246 32984 out.go:129] I0324 20:35:40.046121 913 start.go:207] waiting for startup goroutines ... I0324 20:37:32.676075 32984 out.go:129] I0324 20:35:40.159712 913 start.go:460] kubectl: 1.20.4, cluster: 1.20.2 (minor skew: 0) I0324 20:37:32.676878 32984 out.go:129] I0324 20:35:40.161202 913 out.go:129] * Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default