
VM has 50% resting CPU usage when idle #10644

Closed
robd003 opened this issue Feb 27, 2021 · 12 comments
Labels
area/performance · co/hyperkit · kind/bug · os/macos · priority/important-longterm

Comments


robd003 commented Feb 27, 2021

Steps to reproduce the issue:

  1. minikube start
  2. minikube addons enable metrics-server
  3. minikube addons enable dashboard

CPU usage never drops below 50% for as long as the hyperkit VM runs. This is the same problem as #3207, but here it occurs on macOS Big Sur 11.2.2 with minikube v1.17.1.
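
For reference, a quick way to check whether the cycles are burned by hyperkit itself or by processes inside the VM (assuming the default hyperkit driver; these commands are just one way to sample it, not part of the original report):

  # On the macOS host: sample the CPU of the hyperkit process backing the VM
  ps -Ao pid,pcpu,comm | grep -i hyperkit

  # Inside the VM: one-shot snapshot of the busiest processes (BusyBox top)
  minikube ssh -- top -b -n 1 | head -n 20

In this run the in-VM load average sits around 0.4-0.5 on 2 vCPUs (see the kernel section below), i.e. roughly 20-25% of the guest is busy even though nothing is deployed beyond the bundled addons.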

Optional: Full output of minikube logs command:

==> Docker <== -- Logs begin at Fri 2021-02-26 22:41:09 UTC, end at Sat 2021-02-27 03:33:15 UTC. -- Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.680680852Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.680689031Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.681471570Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.681528414Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.681543187Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.681552670Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.885923009Z" level=warning msg="Your kernel does not support cgroup blkio weight" Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.885966565Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.885975114Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device" Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.885980099Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device" Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.885985053Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device" Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.885989628Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device" Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.886186366Z" level=info msg="Loading containers: start." Feb 26 22:41:32 minikube dockerd[2198]: time="2021-02-26T22:41:32.986441820Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 26 22:41:33 minikube dockerd[2198]: time="2021-02-26T22:41:33.034148307Z" level=info msg="Loading containers: done." Feb 26 22:41:33 minikube dockerd[2198]: time="2021-02-26T22:41:33.063430301Z" level=info msg="Docker daemon" commit=8891c58 graphdriver(s)=overlay2 version=20.10.2 Feb 26 22:41:33 minikube dockerd[2198]: time="2021-02-26T22:41:33.063498938Z" level=info msg="Daemon has completed initialization" Feb 26 22:41:33 minikube systemd[1]: Started Docker Application Container Engine. Feb 26 22:41:33 minikube dockerd[2198]: time="2021-02-26T22:41:33.100253114Z" level=info msg="API listen on [::]:2376" Feb 26 22:41:33 minikube dockerd[2198]: time="2021-02-26T22:41:33.109916139Z" level=info msg="API listen on /var/run/docker.sock" Feb 26 22:41:33 minikube systemd[1]: /usr/lib/systemd/system/docker.service:11: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. Feb 26 22:41:39 minikube systemd[1]: /usr/lib/systemd/system/docker.service:11: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. 
Feb 26 22:41:47 minikube dockerd[2206]: time="2021-02-26T22:41:47.948659512Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fed9c3c519f14e19c1ce86f11a7cca239ac805ca39d599e22f0229778e3bec99 pid=3062 Feb 26 22:41:47 minikube dockerd[2206]: time="2021-02-26T22:41:47.982704813Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/a59c23b0816fe02684d9a3c332bf7f74fdfee0fe171eeb206bd8ae5295daf0ef pid=3093 Feb 26 22:41:48 minikube dockerd[2206]: time="2021-02-26T22:41:47.998615179Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8835e7cadfd9921ff20e53eaa2a0d7fab04c559d43fcdb88a843d0ca63c9d933 pid=3130 Feb 26 22:41:48 minikube dockerd[2206]: time="2021-02-26T22:41:47.996766214Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f24507a6e4e35f64715baf7a8981bc6b52e015251dc995340fe3f91c554f29db pid=3118 Feb 26 22:41:48 minikube dockerd[2206]: time="2021-02-26T22:41:48.803292048Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/bb7f8a1ce0756329b0bf6d716844510c079ac9537447c0ddae284b85f5e26867 pid=3282 Feb 26 22:41:49 minikube dockerd[2206]: time="2021-02-26T22:41:49.052605007Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3b80559f3a2af1b2e0cc56e292623bc8da1e0b7dcc51b44b12aea9b16ca5b7b5 pid=3335 Feb 26 22:41:49 minikube dockerd[2206]: time="2021-02-26T22:41:49.067453600Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/26f46a4adf39cac9471536d5ae2fca08b15cc6aeaed5df6dbd4331c9f177c663 pid=3359 Feb 26 22:41:49 minikube dockerd[2206]: time="2021-02-26T22:41:49.071642133Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7faa73cacecd6775a8d97d8c9a3920f3b80ecd99b1f6f59aad570df58703743b pid=3366 Feb 26 22:41:59 minikube systemd[1]: /usr/lib/systemd/system/docker.service:11: Unknown key name 'StartLimitIntervalSec' in section 'Service', ignoring. 
Feb 26 22:42:15 minikube dockerd[2206]: time="2021-02-26T22:42:15.692351502Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/654a5901832650ba3b977d70c055955cf291589eac809304221082a158c98368 pid=4125 Feb 26 22:42:15 minikube dockerd[2206]: time="2021-02-26T22:42:15.707966890Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/96ac5540646a905b646885489b047863837ba5c5fe8fb4ff317fa094654b715e pid=4144 Feb 26 22:42:15 minikube dockerd[2206]: time="2021-02-26T22:42:15.743752509Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/32952ae99f4cc921f9db4c7ac555f696c5bb0c4c79e572f7171c77256bbff87e pid=4166 Feb 26 22:42:17 minikube dockerd[2206]: time="2021-02-26T22:42:17.028563185Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/bbf7646ab19375e772c6c7a853de15f6aa542525d6f129f2d842b0ec154b164e pid=4313 Feb 26 22:42:17 minikube dockerd[2206]: time="2021-02-26T22:42:17.049083056Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/c78c7db49647e84baa0547e49d2af68fdcd15f57357003edda5f31622c9d1381 pid=4324 Feb 26 22:42:17 minikube dockerd[2206]: time="2021-02-26T22:42:17.214591293Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/974b439946a72df7458e24d2f60b2abfbe98f6d332b65c1914821649a1bf1955 pid=4369 Feb 26 22:42:48 minikube dockerd[2198]: time="2021-02-26T22:42:48.205994593Z" level=info msg="ignoring event" container=bbf7646ab19375e772c6c7a853de15f6aa542525d6f129f2d842b0ec154b164e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 26 22:42:48 minikube dockerd[2206]: time="2021-02-26T22:42:48.207054090Z" level=info msg="shim disconnected" id=bbf7646ab19375e772c6c7a853de15f6aa542525d6f129f2d842b0ec154b164e Feb 26 22:42:49 minikube dockerd[2206]: time="2021-02-26T22:42:49.376761828Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ee183d4537aeb5a20e1ab2062e34433c5b7778e2c71684c9a2aff0fb7461b40d pid=4638 Feb 26 22:47:44 minikube dockerd[2206]: time="2021-02-26T22:47:44.470187560Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/5883be2330c1ecfeb180c8c3564fd744d33bf420c29bdbc335edcbfe7b0ee81d pid=5857 Feb 26 22:47:52 minikube dockerd[2206]: time="2021-02-26T22:47:52.031644049Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ea5c879f935b07c5ac4cb6292ba67b732f1522020838c9332023f5f6219a2c74 pid=5990 Feb 26 22:48:02 minikube dockerd[2206]: time="2021-02-26T22:48:02.622792539Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6024fa425a380aa784b94e32553a9f3d2c34e621694703cce02c20328e122c6a pid=6199 Feb 26 22:48:02 minikube dockerd[2206]: time="2021-02-26T22:48:02.690175793Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/af90f6708a69d7e9df9651b90d214baca757ac7cd3462713905c9b33372a9bd5 pid=6225 Feb 26 22:48:03 minikube dockerd[2206]: time="2021-02-26T22:48:03.443113262Z" 
level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/d14ec46fc23b41c79f707763e7adfe5c5bc8f2d4ea66dd85840cb4050a5fb367 pid=6339 Feb 26 22:48:03 minikube dockerd[2206]: time="2021-02-26T22:48:03.605076355Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6eb4726a9091a204913306b351a1381cd58bed5566cfe400cf328fab042162bc pid=6384 Feb 26 22:48:47 minikube dockerd[2206]: time="2021-02-26T22:48:47.627180405Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6161ff40b8a63342f80db6da1ad8333348ce199c0855d08e0e9d707b39b3c4cb pid=6725 Feb 26 22:48:47 minikube dockerd[2206]: time="2021-02-26T22:48:47.673026642Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9a3cae251b04f592bb769fa59ab79bf5b72e0ce9f3f56171aa5c605408c3f007 pid=6751 Feb 26 22:48:52 minikube dockerd[2206]: time="2021-02-26T22:48:52.912972690Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/2fa0d369efb431c488bd4a0875e61d35a54f0945e99cce92725f0ba24d01bc5c pid=6891 Feb 26 22:48:53 minikube dockerd[2198]: time="2021-02-26T22:48:53.086523448Z" level=info msg="ignoring event" container=2fa0d369efb431c488bd4a0875e61d35a54f0945e99cce92725f0ba24d01bc5c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 26 22:48:53 minikube dockerd[2206]: time="2021-02-26T22:48:53.085763375Z" level=info msg="shim disconnected" id=2fa0d369efb431c488bd4a0875e61d35a54f0945e99cce92725f0ba24d01bc5c Feb 26 22:48:53 minikube dockerd[2198]: time="2021-02-26T22:48:53.468648221Z" level=info msg="ignoring event" container=6161ff40b8a63342f80db6da1ad8333348ce199c0855d08e0e9d707b39b3c4cb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 26 22:48:53 minikube dockerd[2206]: time="2021-02-26T22:48:53.472930046Z" level=info msg="shim disconnected" id=6161ff40b8a63342f80db6da1ad8333348ce199c0855d08e0e9d707b39b3c4cb Feb 26 22:48:55 minikube dockerd[2206]: time="2021-02-26T22:48:55.148184288Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/3bb79ac05d6a4d1cd675085e8d18e2491ffb7e39f750491bb737ea69d18fdaea pid=7026 Feb 26 22:48:56 minikube dockerd[2206]: time="2021-02-26T22:48:56.757958595Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/74b5b88cd434777008b2b2213e06e296af625d69403a3432f3a5b9ae9583d623 pid=7123 Feb 26 22:48:56 minikube dockerd[2206]: time="2021-02-26T22:48:56.948321529Z" level=info msg="shim disconnected" id=74b5b88cd434777008b2b2213e06e296af625d69403a3432f3a5b9ae9583d623 Feb 26 22:48:56 minikube dockerd[2198]: time="2021-02-26T22:48:56.948442169Z" level=info msg="ignoring event" container=74b5b88cd434777008b2b2213e06e296af625d69403a3432f3a5b9ae9583d623 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 26 22:48:57 minikube dockerd[2198]: time="2021-02-26T22:48:57.053728889Z" level=info msg="ignoring event" container=9a3cae251b04f592bb769fa59ab79bf5b72e0ce9f3f56171aa5c605408c3f007 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Feb 26 22:48:57 minikube dockerd[2206]: time="2021-02-26T22:48:57.054444317Z" level=info 
msg="shim disconnected" id=9a3cae251b04f592bb769fa59ab79bf5b72e0ce9f3f56171aa5c605408c3f007 Feb 26 22:49:22 minikube dockerd[2206]: time="2021-02-26T22:49:22.588364693Z" level=info msg="starting signal loop" namespace=moby path=/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/688911657ba5331057eaba0e5202209a2988c815137ac42e8520fcd502743e62 pid=7418

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
688911657ba53 us.gcr.io/k8s-artifacts-prod/ingress-nginx/controller@sha256:46ba23c3fbaafd9e5bd01ea85b2f921d9f2217be082580edc22e6c704a83f02f 5 hours ago Running controller 0 3bb79ac05d6a4
74b5b88cd4347 jettech/kube-webhook-certgen@sha256:ff01fba91131ed260df3f3793009efbf9686f5a5ce78a85f81c386a4403f7689 5 hours ago Exited patch 0 9a3cae251b04f
2fa0d369efb43 jettech/kube-webhook-certgen@sha256:da8122a78d7387909cf34a0f34db0cce672da1379ee4fd57c626a4afe9ac12b7 5 hours ago Exited create 0 6161ff40b8a63
6eb4726a9091a 86262685d9abb 5 hours ago Running dashboard-metrics-scraper 0 af90f6708a69d
d14ec46fc23b4 9a07b5b4bfac0 5 hours ago Running kubernetes-dashboard 0 6024fa425a380
ea5c879f935b0 k8s.gcr.io/metrics-server-amd64@sha256:49a9f12f7067d11f42c803dbe61ed2c1299959ad85cb315b25ff7eef8e6b8892 5 hours ago Running metrics-server 0 5883be2330c1e
ee183d4537aeb 85069258b98ac 5 hours ago Running storage-provisioner 1 96ac5540646a9
974b439946a72 43154ddb57a83 5 hours ago Running kube-proxy 0 654a590183265
bbf7646ab1937 85069258b98ac 5 hours ago Exited storage-provisioner 0 96ac5540646a9
c78c7db49647e bfe3a36ebd252 5 hours ago Running coredns 0 32952ae99f4cc
7faa73cacecd6 a27166429d98e 5 hours ago Running kube-controller-manager 0 a59c23b0816fe
26f46a4adf39c 0369cf4303ffd 5 hours ago Running etcd 0 f24507a6e4e35
3b80559f3a2af a8c2fdb8bf76e 5 hours ago Running kube-apiserver 0 8835e7cadfd99
bb7f8a1ce0756 ed2c44fbdd78b 5 hours ago Running kube-scheduler 0 fed9c3c519f14

==> coredns [c78c7db49647] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = db32ca3650231d74073ff4cf814959a7
CoreDNS-1.7.0
linux/amd64, go1.14.4, f59c03d
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
[INFO] plugin/ready: Still waiting on: "kubernetes"
I0226 22:42:48.122014 1 trace.go:116] Trace[2019727887]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-02-26 22:42:17.507504117 +0000 UTC m=+0.068477772) (total time: 30.614445123s):
Trace[2019727887]: [30.614445123s] [30.614445123s] END
E0226 22:42:48.122041 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0226 22:42:48.122210 1 trace.go:116] Trace[939984059]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-02-26 22:42:17.512910353 +0000 UTC m=+0.073884022) (total time: 30.609291048s):
Trace[939984059]: [30.609291048s] [30.609291048s] END
E0226 22:42:48.122215 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
I0226 22:42:48.122272 1 trace.go:116] Trace[911902081]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125 (started: 2021-02-26 22:42:17.513572821 +0000 UTC m=+0.074546470) (total time: 30.60869266s):
Trace[911902081]: [30.60869266s] [30.60869266s] END
E0226 22:42:48.122277 1 reflector.go:178] pkg/mod/k8s.io/client-go@v0.18.3/tools/cache/reflector.go:125: Failed to list *v1.Endpoints: Get "https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout

==> describe nodes <==
Name: minikube
Roles: control-plane,master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=043bdca07e54ab6e4fc0457e3064048f34133d7e
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2021_02_26T17_41_59_0700
minikube.k8s.io/version=v1.17.1
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Fri, 26 Feb 2021 22:41:56 +0000
Taints: <none>
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime: <unset>
RenewTime: Sat, 27 Feb 2021 03:33:12 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Sat, 27 Feb 2021 03:31:31 +0000 Fri, 26 Feb 2021 22:41:50 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 27 Feb 2021 03:31:31 +0000 Fri, 26 Feb 2021 22:41:50 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 27 Feb 2021 03:31:31 +0000 Fri, 26 Feb 2021 22:41:50 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 27 Feb 2021 03:31:31 +0000 Fri, 26 Feb 2021 22:42:10 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 192.168.64.2
Hostname: minikube
Capacity:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3935188Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 17784752Ki
hugepages-2Mi: 0
memory: 3935188Ki
pods: 110
System Info:
Machine ID: 2b4ae339929b456fa708e9dcf45d6170
System UUID: adc511eb-0000-0000-aa8b-acde48001122
Boot ID: c75eaf91-c442-4115-9ac9-5ff4b69d36a1
Kernel Version: 4.19.157
OS Image: Buildroot 2020.02.8
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://20.10.2
Kubelet Version: v1.20.2
Kube-Proxy Version: v1.20.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (11 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


kube-system coredns-74ff55c5b-zj22p 100m (5%) 0 (0%) 70Mi (1%) 170Mi (4%) 4h51m
kube-system etcd-minikube 100m (5%) 0 (0%) 100Mi (2%) 0 (0%) 4h51m
kube-system ingress-nginx-controller-558664778f-5z2fh 100m (5%) 0 (0%) 90Mi (2%) 0 (0%) 4h44m
kube-system kube-apiserver-minikube 250m (12%) 0 (0%) 0 (0%) 0 (0%) 4h51m
kube-system kube-controller-manager-minikube 200m (10%) 0 (0%) 0 (0%) 0 (0%) 4h51m
kube-system kube-proxy-csr7c 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h51m
kube-system kube-scheduler-minikube 100m (5%) 0 (0%) 0 (0%) 0 (0%) 4h51m
kube-system metrics-server-56c4f8c9d6-mxk8h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h45m
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h51m
kubernetes-dashboard dashboard-metrics-scraper-c95fcf479-xbn89 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h45m
kubernetes-dashboard kubernetes-dashboard-6cff4c7c4f-565nn 0 (0%) 0 (0%) 0 (0%) 0 (0%) 4h45m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 850m (42%) 0 (0%)
memory 260Mi (6%) 170Mi (4%)
ephemeral-storage 100Mi (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:

==> dmesg <==
[Feb26 22:40] ERROR: earlyprintk= earlyser already used
[ +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.346804] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20180810/tbprint-177)
[Feb26 22:41] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[ +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[ +0.009547] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.895814] systemd-fstab-generator[1113]: Ignoring "noauto" for root device
[ +0.050584] systemd[1]: system-getty.slice: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
[ +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
[ +0.946331] SELinux: unrecognized netlink message: protocol=0 nlmsg_type=106 sclass=netlink_route_socket pid=1627 comm=systemd-network
[ +0.673050] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.663732] vboxguest: loading out-of-tree module taints kernel.
[ +0.003513] vboxguest: PCI device not found, probably running on physical hardware.
[ +3.812607] systemd-fstab-generator[1991]: Ignoring "noauto" for root device
[ +0.180430] systemd-fstab-generator[2004]: Ignoring "noauto" for root device
[ +15.969069] systemd-fstab-generator[2187]: Ignoring "noauto" for root device
[ +1.523260] kauditd_printk_skb: 68 callbacks suppressed
[ +0.318006] systemd-fstab-generator[2354]: Ignoring "noauto" for root device
[ +6.678133] systemd-fstab-generator[2600]: Ignoring "noauto" for root device
[ +6.052350] kauditd_printk_skb: 107 callbacks suppressed
[ +12.324464] systemd-fstab-generator[3780]: Ignoring "noauto" for root device
[Feb26 22:42] kauditd_printk_skb: 38 callbacks suppressed
[ +37.012681] kauditd_printk_skb: 47 callbacks suppressed
[Feb26 22:43] NFSD: Unable to end grace period: -110
[ +13.789505] hrtimer: interrupt took 4990114 ns
[Feb26 22:47] kauditd_printk_skb: 5 callbacks suppressed
[Feb26 22:48] kauditd_printk_skb: 20 callbacks suppressed
[ +6.209782] kauditd_printk_skb: 8 callbacks suppressed
[Feb26 22:49] kauditd_printk_skb: 17 callbacks suppressed

==> etcd [26f46a4adf39] <==
2021-02-27 03:24:27.141440 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:24:37.143058 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:24:47.142787 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:24:57.143186 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:25:07.141745 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:25:17.141228 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:25:27.141174 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:25:37.141343 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:25:47.140887 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:25:57.145559 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:26:07.140811 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:26:17.142541 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:26:27.142023 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:26:27.234005 I | etcdserver: start to snapshot (applied: 20002, lastsnap: 10001)
2021-02-27 03:26:27.238816 I | etcdserver: saved snapshot at index 20002
2021-02-27 03:26:27.239493 I | etcdserver: compacted raft log at 15002
2021-02-27 03:26:37.140786 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:26:47.141751 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:26:51.062037 I | mvcc: store.index: compact 14588
2021-02-27 03:26:51.064209 I | mvcc: finished scheduled compaction at 14588 (took 1.203824ms)
2021-02-27 03:26:57.141558 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:27:07.141328 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:27:17.140928 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:27:27.142953 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:27:37.141415 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:27:47.141667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:27:57.141448 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:28:07.141667 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:28:17.140962 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:28:27.141582 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:28:37.141482 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:28:47.144581 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:28:57.141862 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:29:07.140890 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:29:17.142136 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:29:27.142395 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:29:37.141372 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:29:47.142775 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:29:57.141886 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:30:07.140867 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:30:17.141466 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:30:27.140918 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:30:37.141573 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:30:47.142206 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:30:57.141895 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:31:07.141401 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:31:17.143185 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:31:27.141464 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:31:37.140871 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:31:47.140697 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:31:51.074288 I | mvcc: store.index: compact 14838
2021-02-27 03:31:51.076012 I | mvcc: finished scheduled compaction at 14838 (took 1.186073ms)
2021-02-27 03:31:57.141986 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:32:07.140951 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:32:17.144300 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:32:27.142096 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:32:37.141532 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:32:47.141032 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:32:57.140940 I | etcdserver/api/etcdhttp: /health OK (status code 200)
2021-02-27 03:33:07.141084 I | etcdserver/api/etcdhttp: /health OK (status code 200)

==> kernel <==
03:33:17 up 4:52, 0 users, load average: 0.54, 0.41, 0.37
Linux minikube 4.19.157 #1 SMP Wed Jan 20 11:33:19 PST 2021 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2020.02.8"

==> kube-apiserver [3b80559f3a2a] <==
I0227 03:23:53.721100 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:23:53.721125 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0227 03:24:31.577914 1 client.go:360] parsed scheme: "passthrough"
I0227 03:24:31.578052 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:24:31.578075 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0227 03:24:57.844900 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0227 03:24:57.845499 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0227 03:25:03.827119 1 client.go:360] parsed scheme: "passthrough"
I0227 03:25:03.827529 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:25:03.827987 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0227 03:25:39.727441 1 client.go:360] parsed scheme: "passthrough"
I0227 03:25:39.727552 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:25:39.727569 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0227 03:26:20.972857 1 client.go:360] parsed scheme: "passthrough"
I0227 03:26:20.972984 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:26:20.973002 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0227 03:26:57.863313 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0227 03:26:57.863350 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0227 03:27:01.048369 1 client.go:360] parsed scheme: "passthrough"
I0227 03:27:01.049335 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:27:01.049408 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0227 03:27:43.759774 1 client.go:360] parsed scheme: "passthrough"
I0227 03:27:43.759853 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:27:43.759873 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0227 03:27:57.879109 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0227 03:27:57.879153 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0227 03:28:27.615345 1 client.go:360] parsed scheme: "passthrough"
I0227 03:28:27.615749 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:28:27.615972 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0227 03:29:07.265425 1 client.go:360] parsed scheme: "passthrough"
I0227 03:29:07.265478 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:29:07.265489 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0227 03:29:39.303540 1 client.go:360] parsed scheme: "passthrough"
I0227 03:29:39.303658 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:29:39.303677 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0227 03:29:57.904411 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0227 03:29:57.904450 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0227 03:30:21.542768 1 client.go:360] parsed scheme: "passthrough"
I0227 03:30:21.542908 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:30:21.542936 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0227 03:30:56.287190 1 client.go:360] parsed scheme: "passthrough"
I0227 03:30:56.287438 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:30:56.287458 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0227 03:31:24.598323 1 trace.go:205] Trace[2015036167]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,client:192.168.64.2 (27-Feb-2021 03:31:23.996) (total time: 602ms):
Trace[2015036167]: ---"About to write a response" 601ms (03:31:00.598)
Trace[2015036167]: [602.001922ms] [602.001922ms] END
I0227 03:31:34.606749 1 client.go:360] parsed scheme: "passthrough"
I0227 03:31:34.607076 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:31:34.607240 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
W0227 03:31:46.395539 1 watcher.go:220] watch chan error: etcdserver: mvcc: required revision has been compacted
E0227 03:31:57.871938 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0227 03:31:57.872136 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
I0227 03:32:08.772790 1 client.go:360] parsed scheme: "passthrough"
I0227 03:32:08.772861 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:32:08.772871 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0227 03:32:47.052484 1 client.go:360] parsed scheme: "passthrough"
I0227 03:32:47.052570 1 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{https://127.0.0.1:2379 0 }] }
I0227 03:32:47.052590 1 clientconn.go:948] ClientConn switching balancer to "pick_first"
E0227 03:32:57.891079 1 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: OpenAPI spec does not exist
I0227 03:32:57.891120 1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.

==> kube-controller-manager [7faa73cacecd] <==
I0226 22:42:15.070837 1 shared_informer.go:247] Caches are synced for taint
I0226 22:42:15.070991 1 node_lifecycle_controller.go:1429] Initializing eviction metric for zone:
W0226 22:42:15.071050 1 node_lifecycle_controller.go:1044] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0226 22:42:15.071144 1 node_lifecycle_controller.go:1245] Controller detected that zone is now in state Normal.
I0226 22:42:15.071542 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0226 22:42:15.071897 1 event.go:291] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller"
I0226 22:42:15.072322 1 shared_informer.go:247] Caches are synced for TTL
I0226 22:42:15.074478 1 shared_informer.go:247] Caches are synced for endpoint
I0226 22:42:15.088800 1 shared_informer.go:247] Caches are synced for PV protection
I0226 22:42:15.088847 1 shared_informer.go:247] Caches are synced for attach detach
I0226 22:42:15.121299 1 shared_informer.go:247] Caches are synced for expand
E0226 22:42:15.130507 1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
I0226 22:42:15.134979 1 event.go:291] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-74ff55c5b to 1"
I0226 22:42:15.136313 1 event.go:291] "Event occurred" object="kube-system/coredns-74ff55c5b" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-74ff55c5b-zj22p"
I0226 22:42:15.171399 1 shared_informer.go:247] Caches are synced for persistent volume
I0226 22:42:15.173447 1 shared_informer.go:247] Caches are synced for bootstrap_signer
I0226 22:42:15.187952 1 event.go:291] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-csr7c"
I0226 22:42:15.195604 1 shared_informer.go:247] Caches are synced for namespace
I0226 22:42:15.220172 1 shared_informer.go:247] Caches are synced for service account
I0226 22:42:15.222590 1 shared_informer.go:247] Caches are synced for crt configmap
I0226 22:42:15.228747 1 shared_informer.go:247] Caches are synced for resource quota
I0226 22:42:15.241245 1 shared_informer.go:247] Caches are synced for resource quota
E0226 22:42:15.280785 1 daemon_controller.go:320] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"f5dfd9a4-661f-4576-bfe1-c5566f385ae8", ResourceVersion:"256", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63749976119, loc:(*time.Location)(0x6f31360)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00045c540), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00045c560)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00045c580), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc000d8e580), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00045c5c0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00045c5e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil), Ephemeral:(*v1.EphemeralVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.20.2", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00045c620)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000cf0720), Stdin:false, 
StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0010222c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000abcf50), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil), SetHostnameAsFQDN:(*bool)(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc0001336a8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001022318)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0226 22:42:15.411531 1 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0226 22:42:15.657038 1 shared_informer.go:247] Caches are synced for garbage collector
I0226 22:42:15.657064 1 garbagecollector.go:151] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0226 22:42:15.711737 1 shared_informer.go:247] Caches are synced for garbage collector
I0226 22:47:43.928377 1 event.go:291] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-56c4f8c9d6 to 1"
I0226 22:47:43.949289 1 event.go:291] "Event occurred" object="kube-system/metrics-server-56c4f8c9d6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-56c4f8c9d6-mxk8h"
E0226 22:47:50.282726 1 resource_quota_controller.go:409] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0226 22:47:51.904402 1 request.go:655] Throttling request took 1.047945691s, request: GET:https://192.168.64.2:8443/apis/extensions/v1beta1?timeout=32s
W0226 22:47:52.755871 1 garbagecollector.go:703] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
I0226 22:48:00.972286 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-c95fcf479 to 1"
I0226 22:48:00.982585 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-6cff4c7c4f to 1"
I0226 22:48:00.991725 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "dashboard-metrics-scraper-c95fcf479-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
I0226 22:48:01.003987 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6cff4c7c4f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "kubernetes-dashboard-6cff4c7c4f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
E0226 22:48:01.014954 1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" failed with pods "dashboard-metrics-scraper-c95fcf479-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0226 22:48:01.023455 1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6cff4c7c4f" failed with pods "kubernetes-dashboard-6cff4c7c4f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0226 22:48:01.033698 1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" failed with pods "dashboard-metrics-scraper-c95fcf479-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0226 22:48:01.034376 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "dashboard-metrics-scraper-c95fcf479-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
E0226 22:48:01.060043 1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6cff4c7c4f" failed with pods "kubernetes-dashboard-6cff4c7c4f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0226 22:48:01.060592 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6cff4c7c4f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "kubernetes-dashboard-6cff4c7c4f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
I0226 22:48:01.062286 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "dashboard-metrics-scraper-c95fcf479-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
E0226 22:48:01.060804 1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" failed with pods "dashboard-metrics-scraper-c95fcf479-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
E0226 22:48:01.086540 1 replica_set.go:532] sync "kubernetes-dashboard/kubernetes-dashboard-6cff4c7c4f" failed with pods "kubernetes-dashboard-6cff4c7c4f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0226 22:48:01.086893 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6cff4c7c4f" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "kubernetes-dashboard-6cff4c7c4f-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
I0226 22:48:01.087186 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods "dashboard-metrics-scraper-c95fcf479-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found"
E0226 22:48:01.087358 1 replica_set.go:532] sync "kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" failed with pods "dashboard-metrics-scraper-c95fcf479-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
I0226 22:48:02.112639 1 event.go:291] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-6cff4c7c4f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-6cff4c7c4f-565nn"
I0226 22:48:02.136677 1 event.go:291] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-c95fcf479-xbn89"
I0226 22:48:47.076672 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-controller" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set ingress-nginx-controller-558664778f to 1"
I0226 22:48:47.092153 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-controller-558664778f" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-controller-558664778f-5z2fh"
I0226 22:48:47.184477 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-create-bz9cm"
I0226 22:48:47.221854 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: ingress-nginx-admission-patch-mwmhl"
I0226 22:48:53.387652 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-admission-create" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0226 22:48:56.992341 1 event.go:291] "Event occurred" object="kube-system/ingress-nginx-admission-patch" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
I0226 23:04:40.321390 1 request.go:655] Throttling request took 1.420381932s, request: GET:https://192.168.64.2:8443/apis/scheduling.k8s.io/v1?timeout=32s
I0227 00:42:10.972045 1 cleaner.go:180] Cleaning CSR "csr-rgql2" as it is more than 1h0m0s old and approved.
I0227 02:33:06.614040 1 request.go:655] Throttling request took 1.128482077s, request: GET:https://192.168.64.2:8443/apis/coordination.k8s.io/v1?timeout=32s
I0227 02:44:20.673805 1 request.go:655] Throttling request took 1.078818303s, request: GET:https://192.168.64.2:8443/apis/scheduling.k8s.io/v1?timeout=32s

==> kube-proxy [974b439946a7] <==
I0226 22:42:17.722513 1 node.go:172] Successfully retrieved node IP: 192.168.64.2
I0226 22:42:17.722616 1 server_others.go:142] kube-proxy node IP is an IPv4 address (192.168.64.2), assume IPv4 operation
W0226 22:42:17.756388 1 server_others.go:578] Unknown proxy mode "", assuming iptables proxy
I0226 22:42:17.756531 1 server_others.go:185] Using iptables Proxier.
I0226 22:42:17.756788 1 server.go:650] Version: v1.20.2
I0226 22:42:17.758361 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0226 22:42:17.758425 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0226 22:42:17.758959 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0226 22:42:17.762559 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0226 22:42:17.762867 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0226 22:42:17.765118 1 config.go:315] Starting service config controller
I0226 22:42:17.767447 1 shared_informer.go:240] Waiting for caches to sync for service config
I0226 22:42:17.767497 1 shared_informer.go:247] Caches are synced for service config
I0226 22:42:17.765366 1 config.go:224] Starting endpoint slice config controller
I0226 22:42:17.767528 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0226 22:42:17.867890 1 shared_informer.go:247] Caches are synced for endpoint slice config

==> kube-scheduler [bb7f8a1ce075] <==
I0226 22:41:52.098574 1 serving.go:331] Generated self-signed cert in-memory
W0226 22:41:56.091030 1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0226 22:41:56.091074 1 authentication.go:332] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0226 22:41:56.091082 1 authentication.go:333] Continuing without authentication configuration. This may treat all requests as anonymous.
W0226 22:41:56.091086 1 authentication.go:334] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0226 22:41:56.218744 1 secure_serving.go:197] Serving securely on 127.0.0.1:10259
I0226 22:41:56.218931 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0226 22:41:56.221863 1 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0226 22:41:56.219004 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0226 22:41:56.260432 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0226 22:41:56.267740 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0226 22:41:56.270669 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0226 22:41:56.271021 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0226 22:41:56.272294 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0226 22:41:56.272544 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0226 22:41:56.272825 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0226 22:41:56.273124 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0226 22:41:56.275528 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0226 22:41:56.275833 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0226 22:41:56.276027 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0226 22:41:56.276172 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0226 22:41:57.082309 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0226 22:41:57.172667 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0226 22:41:57.287943 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0226 22:41:57.322773 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0226 22:41:57.380420 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0226 22:41:57.399631 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
I0226 22:41:57.922531 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kubelet <==
-- Logs begin at Fri 2021-02-26 22:41:09 UTC, end at Sat 2021-02-27 03:33:18 UTC. --
Feb 26 22:42:49 minikube kubelet[3789]: I0226 22:42:49.172329 3789 scope.go:95] [topologymanager] RemoveContainer - Container ID: bbf7646ab19375e772c6c7a853de15f6aa542525d6f129f2d842b0ec154b164e
Feb 26 22:47:43 minikube kubelet[3789]: I0226 22:47:43.965092 3789 topology_manager.go:187] [topologymanager] Topology Admit Handler
Feb 26 22:47:44 minikube kubelet[3789]: I0226 22:47:44.133795 3789 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-5lcbz" (UniqueName: "kubernetes.io/secret/1f96aec8-239e-48c5-8c7a-4bec60ec0de8-default-token-5lcbz") pod "metrics-server-56c4f8c9d6-mxk8h" (UID: "1f96aec8-239e-48c5-8c7a-4bec60ec0de8")
Feb 26 22:47:45 minikube kubelet[3789]: W0226 22:47:45.089123 3789 pod_container_deletor.go:79] Container "5883be2330c1ecfeb180c8c3564fd744d33bf420c29bdbc335edcbfe7b0ee81d" not found in pod's containers
Feb 26 22:47:45 minikube kubelet[3789]: W0226 22:47:45.100231 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-56c4f8c9d6-mxk8h through plugin: invalid network status for
Feb 26 22:47:46 minikube kubelet[3789]: W0226 22:47:46.108971 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-56c4f8c9d6-mxk8h through plugin: invalid network status for
Feb 26 22:47:52 minikube kubelet[3789]: W0226 22:47:52.166794 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-56c4f8c9d6-mxk8h through plugin: invalid network status for
Feb 26 22:47:53 minikube kubelet[3789]: W0226 22:47:53.202060 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/metrics-server-56c4f8c9d6-mxk8h through plugin: invalid network status for
Feb 26 22:48:02 minikube kubelet[3789]: I0226 22:48:02.131109 3789 topology_manager.go:187] [topologymanager] Topology Admit Handler
Feb 26 22:48:02 minikube kubelet[3789]: I0226 22:48:02.139973 3789 topology_manager.go:187] [topologymanager] Topology Admit Handler
Feb 26 22:48:02 minikube kubelet[3789]: I0226 22:48:02.227142 3789 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/98d56ee8-c914-4938-b668-6a68736ebb27-tmp-volume") pod "dashboard-metrics-scraper-c95fcf479-xbn89" (UID: "98d56ee8-c914-4938-b668-6a68736ebb27")
Feb 26 22:48:02 minikube kubelet[3789]: I0226 22:48:02.227340 3789 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/8fcab4a3-359e-4ad0-af24-5e7721efd26a-tmp-volume") pod "kubernetes-dashboard-6cff4c7c4f-565nn" (UID: "8fcab4a3-359e-4ad0-af24-5e7721efd26a")
Feb 26 22:48:02 minikube kubelet[3789]: I0226 22:48:02.229965 3789 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-2n2fp" (UniqueName: "kubernetes.io/secret/98d56ee8-c914-4938-b668-6a68736ebb27-kubernetes-dashboard-token-2n2fp") pod "dashboard-metrics-scraper-c95fcf479-xbn89" (UID: "98d56ee8-c914-4938-b668-6a68736ebb27")
Feb 26 22:48:02 minikube kubelet[3789]: I0226 22:48:02.230163 3789 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-2n2fp" (UniqueName: "kubernetes.io/secret/8fcab4a3-359e-4ad0-af24-5e7721efd26a-kubernetes-dashboard-token-2n2fp") pod "kubernetes-dashboard-6cff4c7c4f-565nn" (UID: "8fcab4a3-359e-4ad0-af24-5e7721efd26a")
Feb 26 22:48:03 minikube kubelet[3789]: W0226 22:48:03.377503 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-6cff4c7c4f-565nn through plugin: invalid network status for
Feb 26 22:48:03 minikube kubelet[3789]: W0226 22:48:03.380031 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-6cff4c7c4f-565nn through plugin: invalid network status for
Feb 26 22:48:03 minikube kubelet[3789]: W0226 22:48:03.380308 3789 pod_container_deletor.go:79] Container "6024fa425a380aa784b94e32553a9f3d2c34e621694703cce02c20328e122c6a" not found in pod's containers
Feb 26 22:48:03 minikube kubelet[3789]: W0226 22:48:03.491383 3789 pod_container_deletor.go:79] Container "af90f6708a69d7e9df9651b90d214baca757ac7cd3462713905c9b33372a9bd5" not found in pod's containers
Feb 26 22:48:03 minikube kubelet[3789]: W0226 22:48:03.494314 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-xbn89 through plugin: invalid network status for
Feb 26 22:48:04 minikube kubelet[3789]: W0226 22:48:04.503025 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-6cff4c7c4f-565nn through plugin: invalid network status for
Feb 26 22:48:04 minikube kubelet[3789]: W0226 22:48:04.557113 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-c95fcf479-xbn89 through plugin: invalid network status for
Feb 26 22:48:47 minikube kubelet[3789]: I0226 22:48:47.103545 3789 topology_manager.go:187] [topologymanager] Topology Admit Handler
Feb 26 22:48:47 minikube kubelet[3789]: I0226 22:48:47.196360 3789 topology_manager.go:187] [topologymanager] Topology Admit Handler
Feb 26 22:48:47 minikube kubelet[3789]: I0226 22:48:47.223108 3789 topology_manager.go:187] [topologymanager] Topology Admit Handler
Feb 26 22:48:47 minikube kubelet[3789]: I0226 22:48:47.229460 3789 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/30c8096a-a249-4010-aa61-3a6c842ce296-webhook-cert") pod "ingress-nginx-controller-558664778f-5z2fh" (UID: "30c8096a-a249-4010-aa61-3a6c842ce296")
Feb 26 22:48:47 minikube kubelet[3789]: I0226 22:48:47.229585 3789 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ingress-nginx-token-5p2pt" (UniqueName: "kubernetes.io/secret/30c8096a-a249-4010-aa61-3a6c842ce296-ingress-nginx-token-5p2pt") pod "ingress-nginx-controller-558664778f-5z2fh" (UID: "30c8096a-a249-4010-aa61-3a6c842ce296")
Feb 26 22:48:47 minikube kubelet[3789]: I0226 22:48:47.329902 3789 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ingress-nginx-admission-token-rr6m6" (UniqueName: "kubernetes.io/secret/76c63f0f-822f-4586-b090-3f0334231bb0-ingress-nginx-admission-token-rr6m6") pod "ingress-nginx-admission-patch-mwmhl" (UID: "76c63f0f-822f-4586-b090-3f0334231bb0")
Feb 26 22:48:47 minikube kubelet[3789]: I0226 22:48:47.330061 3789 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ingress-nginx-admission-token-rr6m6" (UniqueName: "kubernetes.io/secret/4c4e2659-ca22-44da-b112-de22b20fdc81-ingress-nginx-admission-token-rr6m6") pod "ingress-nginx-admission-create-bz9cm" (UID: "4c4e2659-ca22-44da-b112-de22b20fdc81")
Feb 26 22:48:47 minikube kubelet[3789]: E0226 22:48:47.330863 3789 secret.go:195] Couldn't get secret kube-system/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Feb 26 22:48:47 minikube kubelet[3789]: E0226 22:48:47.331074 3789 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/30c8096a-a249-4010-aa61-3a6c842ce296-webhook-cert podName:30c8096a-a249-4010-aa61-3a6c842ce296 nodeName:}" failed. No retries permitted until 2021-02-26 22:48:47.83099057 +0000 UTC m=+408.401269346 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/30c8096a-a249-4010-aa61-3a6c842ce296-webhook-cert") pod "ingress-nginx-controller-558664778f-5z2fh" (UID: "30c8096a-a249-4010-aa61-3a6c842ce296") : secret "ingress-nginx-admission" not found"
Feb 26 22:48:47 minikube kubelet[3789]: E0226 22:48:47.832922 3789 secret.go:195] Couldn't get secret kube-system/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Feb 26 22:48:47 minikube kubelet[3789]: E0226 22:48:47.833011 3789 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/30c8096a-a249-4010-aa61-3a6c842ce296-webhook-cert podName:30c8096a-a249-4010-aa61-3a6c842ce296 nodeName:}" failed. No retries permitted until 2021-02-26 22:48:48.83299388 +0000 UTC m=+409.403272647 (durationBeforeRetry 1s). Error: "MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/30c8096a-a249-4010-aa61-3a6c842ce296-webhook-cert") pod "ingress-nginx-controller-558664778f-5z2fh" (UID: "30c8096a-a249-4010-aa61-3a6c842ce296") : secret "ingress-nginx-admission" not found"
Feb 26 22:48:48 minikube kubelet[3789]: W0226 22:48:48.193610 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-admission-create-bz9cm through plugin: invalid network status for
Feb 26 22:48:48 minikube kubelet[3789]: W0226 22:48:48.195584 3789 pod_container_deletor.go:79] Container "6161ff40b8a63342f80db6da1ad8333348ce199c0855d08e0e9d707b39b3c4cb" not found in pod's containers
Feb 26 22:48:48 minikube kubelet[3789]: W0226 22:48:48.195612 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-admission-create-bz9cm through plugin: invalid network status for
Feb 26 22:48:48 minikube kubelet[3789]: W0226 22:48:48.287854 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-admission-patch-mwmhl through plugin: invalid network status for
Feb 26 22:48:48 minikube kubelet[3789]: W0226 22:48:48.288867 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-admission-patch-mwmhl through plugin: invalid network status for
Feb 26 22:48:48 minikube kubelet[3789]: W0226 22:48:48.301124 3789 pod_container_deletor.go:79] Container "9a3cae251b04f592bb769fa59ab79bf5b72e0ce9f3f56171aa5c605408c3f007" not found in pod's containers
Feb 26 22:48:48 minikube kubelet[3789]: E0226 22:48:48.838569 3789 secret.go:195] Couldn't get secret kube-system/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Feb 26 22:48:48 minikube kubelet[3789]: E0226 22:48:48.838634 3789 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/30c8096a-a249-4010-aa61-3a6c842ce296-webhook-cert podName:30c8096a-a249-4010-aa61-3a6c842ce296 nodeName:}" failed. No retries permitted until 2021-02-26 22:48:50.838616801 +0000 UTC m=+411.408895571 (durationBeforeRetry 2s). Error: "MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/30c8096a-a249-4010-aa61-3a6c842ce296-webhook-cert") pod "ingress-nginx-controller-558664778f-5z2fh" (UID: "30c8096a-a249-4010-aa61-3a6c842ce296") : secret "ingress-nginx-admission" not found"
Feb 26 22:48:49 minikube kubelet[3789]: W0226 22:48:49.313302 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-admission-create-bz9cm through plugin: invalid network status for
Feb 26 22:48:49 minikube kubelet[3789]: W0226 22:48:49.317339 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-admission-patch-mwmhl through plugin: invalid network status for
Feb 26 22:48:50 minikube kubelet[3789]: E0226 22:48:50.855983 3789 secret.go:195] Couldn't get secret kube-system/ingress-nginx-admission: secret "ingress-nginx-admission" not found
Feb 26 22:48:50 minikube kubelet[3789]: E0226 22:48:50.856169 3789 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/30c8096a-a249-4010-aa61-3a6c842ce296-webhook-cert podName:30c8096a-a249-4010-aa61-3a6c842ce296 nodeName:}" failed. No retries permitted until 2021-02-26 22:48:54.856141986 +0000 UTC m=+415.426420762 (durationBeforeRetry 4s). Error: "MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/30c8096a-a249-4010-aa61-3a6c842ce296-webhook-cert") pod "ingress-nginx-controller-558664778f-5z2fh" (UID: "30c8096a-a249-4010-aa61-3a6c842ce296") : secret "ingress-nginx-admission" not found"
Feb 26 22:48:53 minikube kubelet[3789]: W0226 22:48:53.373141 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-admission-create-bz9cm through plugin: invalid network status for
Feb 26 22:48:53 minikube kubelet[3789]: I0226 22:48:53.377514 3789 scope.go:95] [topologymanager] RemoveContainer - Container ID: 2fa0d369efb431c488bd4a0875e61d35a54f0945e99cce92725f0ba24d01bc5c
Feb 26 22:48:53 minikube kubelet[3789]: I0226 22:48:53.471163 3789 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-admission-token-rr6m6" (UniqueName: "kubernetes.io/secret/4c4e2659-ca22-44da-b112-de22b20fdc81-ingress-nginx-admission-token-rr6m6") pod "4c4e2659-ca22-44da-b112-de22b20fdc81" (UID: "4c4e2659-ca22-44da-b112-de22b20fdc81")
Feb 26 22:48:53 minikube kubelet[3789]: I0226 22:48:53.484902 3789 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c4e2659-ca22-44da-b112-de22b20fdc81-ingress-nginx-admission-token-rr6m6" (OuterVolumeSpecName: "ingress-nginx-admission-token-rr6m6") pod "4c4e2659-ca22-44da-b112-de22b20fdc81" (UID: "4c4e2659-ca22-44da-b112-de22b20fdc81"). InnerVolumeSpecName "ingress-nginx-admission-token-rr6m6". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:48:53 minikube kubelet[3789]: I0226 22:48:53.571927 3789 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-rr6m6" (UniqueName: "kubernetes.io/secret/4c4e2659-ca22-44da-b112-de22b20fdc81-ingress-nginx-admission-token-rr6m6") on node "minikube" DevicePath ""
Feb 26 22:48:54 minikube kubelet[3789]: W0226 22:48:54.401898 3789 pod_container_deletor.go:79] Container "6161ff40b8a63342f80db6da1ad8333348ce199c0855d08e0e9d707b39b3c4cb" not found in pod's containers
Feb 26 22:48:55 minikube kubelet[3789]: W0226 22:48:55.960330 3789 pod_container_deletor.go:79] Container "3bb79ac05d6a4d1cd675085e8d18e2491ffb7e39f750491bb737ea69d18fdaea" not found in pod's containers
Feb 26 22:48:55 minikube kubelet[3789]: W0226 22:48:55.965518 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-controller-558664778f-5z2fh through plugin: invalid network status for
Feb 26 22:48:56 minikube kubelet[3789]: W0226 22:48:56.970552 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-admission-patch-mwmhl through plugin: invalid network status for
Feb 26 22:48:56 minikube kubelet[3789]: I0226 22:48:56.981364 3789 scope.go:95] [topologymanager] RemoveContainer - Container ID: 74b5b88cd434777008b2b2213e06e296af625d69403a3432f3a5b9ae9583d623
Feb 26 22:48:56 minikube kubelet[3789]: W0226 22:48:56.987524 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-controller-558664778f-5z2fh through plugin: invalid network status for
Feb 26 22:48:57 minikube kubelet[3789]: I0226 22:48:57.090048 3789 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-admission-token-rr6m6" (UniqueName: "kubernetes.io/secret/76c63f0f-822f-4586-b090-3f0334231bb0-ingress-nginx-admission-token-rr6m6") pod "76c63f0f-822f-4586-b090-3f0334231bb0" (UID: "76c63f0f-822f-4586-b090-3f0334231bb0")
Feb 26 22:48:57 minikube kubelet[3789]: I0226 22:48:57.103762 3789 operation_generator.go:797] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76c63f0f-822f-4586-b090-3f0334231bb0-ingress-nginx-admission-token-rr6m6" (OuterVolumeSpecName: "ingress-nginx-admission-token-rr6m6") pod "76c63f0f-822f-4586-b090-3f0334231bb0" (UID: "76c63f0f-822f-4586-b090-3f0334231bb0"). InnerVolumeSpecName "ingress-nginx-admission-token-rr6m6". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 26 22:48:57 minikube kubelet[3789]: I0226 22:48:57.195853 3789 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-rr6m6" (UniqueName: "kubernetes.io/secret/76c63f0f-822f-4586-b090-3f0334231bb0-ingress-nginx-admission-token-rr6m6") on node "minikube" DevicePath ""
Feb 26 22:48:58 minikube kubelet[3789]: W0226 22:48:58.003432 3789 pod_container_deletor.go:79] Container "9a3cae251b04f592bb769fa59ab79bf5b72e0ce9f3f56171aa5c605408c3f007" not found in pod's containers
Feb 26 22:49:22 minikube kubelet[3789]: W0226 22:49:22.743582 3789 docker_sandbox.go:402] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/ingress-nginx-controller-558664778f-5z2fh through plugin: invalid network status for

==> kubernetes-dashboard [d14ec46fc23b] <==
2021/02/27 01:45:58 [2021-02-27T01:45:58Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:45:58 Getting list of namespaces
2021/02/27 01:45:58 [2021-02-27T01:45:58Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:03 [2021-02-27T01:46:03Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:03 Getting list of namespaces
2021/02/27 01:46:03 [2021-02-27T01:46:03Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:08 [2021-02-27T01:46:08Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:08 Getting list of namespaces
2021/02/27 01:46:08 [2021-02-27T01:46:08Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:13 [2021-02-27T01:46:13Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:13 Getting list of namespaces
2021/02/27 01:46:13 [2021-02-27T01:46:13Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:18 [2021-02-27T01:46:18Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:18 Getting list of namespaces
2021/02/27 01:46:18 [2021-02-27T01:46:18Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:23 [2021-02-27T01:46:23Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:23 Getting list of namespaces
2021/02/27 01:46:23 [2021-02-27T01:46:23Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:28 [2021-02-27T01:46:28Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:28 Getting list of namespaces
2021/02/27 01:46:28 [2021-02-27T01:46:28Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:33 [2021-02-27T01:46:33Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:33 Getting list of namespaces
2021/02/27 01:46:33 [2021-02-27T01:46:33Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:38 [2021-02-27T01:46:38Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:38 Getting list of namespaces
2021/02/27 01:46:38 [2021-02-27T01:46:38Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:43 [2021-02-27T01:46:43Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:43 Getting list of namespaces
2021/02/27 01:46:43 [2021-02-27T01:46:43Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:48 [2021-02-27T01:46:48Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:48 Getting list of namespaces
2021/02/27 01:46:48 [2021-02-27T01:46:48Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:53 [2021-02-27T01:46:53Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:53 Getting list of namespaces
2021/02/27 01:46:53 [2021-02-27T01:46:53Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:46:58 [2021-02-27T01:46:58Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:46:58 Getting list of namespaces
2021/02/27 01:46:58 [2021-02-27T01:46:58Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:47:03 [2021-02-27T01:47:03Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:47:03 Getting list of namespaces
2021/02/27 01:47:03 [2021-02-27T01:47:03Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:47:08 [2021-02-27T01:47:08Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:47:08 Getting list of namespaces
2021/02/27 01:47:08 [2021-02-27T01:47:08Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:47:13 [2021-02-27T01:47:13Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:47:13 Getting list of namespaces
2021/02/27 01:47:13 [2021-02-27T01:47:13Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:47:18 [2021-02-27T01:47:18Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:47:18 Getting list of namespaces
2021/02/27 01:47:18 [2021-02-27T01:47:18Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:47:23 [2021-02-27T01:47:23Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:47:23 Getting list of namespaces
2021/02/27 01:47:23 [2021-02-27T01:47:23Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:47:28 [2021-02-27T01:47:28Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:47:28 Getting list of namespaces
2021/02/27 01:47:28 [2021-02-27T01:47:28Z] Outcoming response to 127.0.0.1 with 200 status code
2021/02/27 01:47:33 [2021-02-27T01:47:33Z] Incoming HTTP/1.1 GET /api/v1/namespace request from 127.0.0.1:
2021/02/27 01:47:33 Getting list of namespaces
2021/02/27 01:47:33 [2021-02-27T01:47:33Z] Outcoming response to 127.0.0.1 with 200 status code

==> storage-provisioner [bbf7646ab193] <==
I0226 22:42:17.307620 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
F0226 22:42:47.311287 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout

==> storage-provisioner [ee183d4537ae] <==
I0226 22:42:49.526847 1 storage_provisioner.go:115] Initializing the minikube storage provisioner...
I0226 22:42:49.543165 1 storage_provisioner.go:140] Storage provisioner initialized, now starting service!
I0226 22:42:49.543226 1 leaderelection.go:242] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0226 22:42:49.567641 1 leaderelection.go:252] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0226 22:42:49.568105 1 controller.go:799] Starting provisioner controller k8s.io/minikube-hostpath_minikube_eeb57339-f992-422c-be47-67bd55b943e5!
I0226 22:42:49.569516 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"11f8096e-78ab-48af-86ba-cc1ee55eacd4", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_eeb57339-f992-422c-be47-67bd55b943e5 became leader
I0226 22:42:49.668513 1 controller.go:848] Started provisioner controller k8s.io/minikube-hostpath_minikube_eeb57339-f992-422c-be47-67bd55b943e5!

@robd003 (Author) commented Feb 27, 2021

This issue persists even hours after the VM has been started. It seems like the high CPU usage never stops for kube-apiserver and kubelet.

@afbjorklund afbjorklund added co/hyperkit Hyperkit related issues os/macos labels Feb 27, 2021
@afbjorklund (Collaborator)

I think this is known, and the workaround is to use minikube pause when it is not in use.
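
For reference, the pause workaround is just two commands; minikube unpause resumes the control plane when you need the cluster again:

minikube pause
minikube unpause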

@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. area/performance Performance related issues priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Feb 27, 2021
@robd003 (Author) commented Feb 27, 2021

It would be great if we could change the default Kubernetes polling intervals to something less aggressive. Maybe only look for updates once every 5 or 10 seconds?
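
For illustration only: minikube's --extra-config flag can pass such settings through to the Kubernetes components at start time. The flags below are real kubelet/controller-manager options, but the values are only assumptions to experiment with, not recommended settings, and they cover just a small part of the control plane's polling:

minikube start \
  --extra-config=kubelet.node-status-update-frequency=1m \
  --extra-config=controller-manager.node-monitor-period=30s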

@devZer0 commented Mar 23, 2021

canonical/microk8s#1567

@medyagh (Member) commented Mar 29, 2021

@robd003 @devZer0 have you tried our Auto-Pause addon (currently only available on the Docker driver)?

minikube addons enable auto-pause

@robd003 (Author) commented Mar 30, 2021

@medyagh The problem is that it's burning a ton of CPU while doing effectively nothing, which wastes battery power on a laptop. It'd be great if we could tune Kubernetes for minikube so that it would poll once a second rather than every 5ms.

@devZer0 commented Mar 31, 2021

@robd003 @devZer0 have you tried our Auto-Pause addon (currently only available on Docker Driver)

minikube addons enable auto-pause

No, I don't think this is an option.

I agree with @robd003. I think it needs to be clarified whether it's a bug or a feature that minikube burns a significant amount of CPU when idle/doing nothing, and whether something can be done about it.

I know that some software needs to use polling or similar techniques for performance reasons, but I really wonder whether that is necessary for a container orchestration platform, especially the "micro version" of one, which is meant for developers and often runs on battery-powered devices.

Burning CPU on a laptop is not only wasting energy, it's avoidably stressing the device and its battery, especially in summer. We have tons of laptops in our company that need service because of battery defects, and the less software putting stress on the battery, the longer they will last...

Furthermore, we're in 2021 and I think we cannot afford to waste precious energy at a large scale, and Kubernetes is software used at large scale, so there may be some tons of CO2 that could be saved...

So when Kubernetes (and derivatives) waste energy by default and nobody can explain why this is really needed (i.e. intentional), then we all should put work into addressing that.

From my experience there is often a lot of inefficiency in big software stacks, mostly because such stuff runs on big iron (where it doesn't matter) and/or nobody has the time or knowledge (or even dares) to really take a close look.

@medyagh (Member) commented Apr 7, 2021

(quoting the Auto-Pause suggestion and @devZer0's reply above)

@devZer0 @robd003 I agree with you; it is a shame how much energy we waste and how much it contributes to global warming.
Over the past year we worked in minikube to reduce the CPU usage by more than 50%.
We identified the exact components of Kubernetes that use the most CPU; Priya Wadwha has a great talk about how we did that: https://www.youtube.com/watch?v=tvreJem3xIw

But that is still not enough, which is why we started a new feature, auto-pause (currently only on the docker/podman drivers).
You could use it on minikube v1.19.0-beta0.

The binaries for the beta release can be found here:

https://storage.googleapis.com/minikube/latest/minikube-linux-amd64

The current design of auto-pause pauses Kubernetes when it is not in use. I would love to hear your feedback on the Auto-Pause addon!
If you want to give it a try (on minikube v1.19.0-beta0+), here is how to enable it:

minikube start
minikube addons enable auto-pause
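
If you try it, running minikube status a couple of minutes after the cluster goes idle should report the apiserver as Paused (assuming the addon behaves as it does on the docker driver):

minikube status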

@medyagh (Member) commented May 26, 2021

@devZer0 @robd003 have you had a chance to try the latest minikube binary for the Auto-Pause addon on VM drivers?

curl -LO https://storage.googleapis.com/minikube/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube delete --all
minikube start
minikube addons enable auto-pause
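
Since this issue is on macOS/hyperkit, the darwin binary from the same bucket should be the one to install; the URL below assumes the release layout matches the linux URL above:

curl -LO https://storage.googleapis.com/minikube/latest/minikube-darwin-amd64
sudo install minikube-darwin-amd64 /usr/local/bin/minikube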

@robd003 (Author) commented May 27, 2021

@medyagh Yes I've tried the latest build; however, the problem isn't pausing Kubernetes, but rather the high CPU usage when it's running. There's got to be a way to de-tune it and have polling happen once every 500-1000ms instead of constantly pegging the CPU when running on a developer laptop.

I have no problem shutting down Kubernetes when I don't need it. The issue that persists is Kubernetes' high load when doing the bare minimum of tasks.

@devZer0 commented May 27, 2021

I see it the same way as @robd003.

The problem for me is not shutting it down, pausing, or suspending; I could also pause/suspend the virtual machine.

minikube should not burn CPU when mostly idle.

If there is polling at a high rate, it should be explained why it is absolutely necessary, and if reducing the polling rate could be a reasonable approach to lower CPU usage, then it should be made tunable for developer laptops or lower-end machines.

@medyagh (Member) commented Aug 11, 2021

Unfortunately the CPU usage of Kubernetes itself cannot be changed by minikube; for that I strongly suggest creating an issue on Kubernetes itself.

Meanwhile we have published benchmarks on minikube's CPU usage, which currently is at 9% at the highest:

on Linux https://minikube.sigs.k8s.io/docs/benchmarks/cpuusage/linux/
on macOS https://minikube.sigs.k8s.io/docs/benchmarks/cpuusage/macos/

with and without auto-pause.

I will close this issue, but please reopen it if anything on the minikube side can be done. I would love this discussion to be followed up in https://github.com/kubernetes/kubernetes

@medyagh medyagh closed this as completed Aug 11, 2021