dashboard: The ClusterRoleBinding "kubernetes-dashboard" is invalid: cannot change roleRef #7256

Closed
TroubleConsultant opened this issue Mar 26, 2020 · 12 comments
Labels: co/dashboard, good first issue, help wanted, kind/bug, needs-solution-message, priority/important-longterm


TroubleConsultant commented Mar 26, 2020

The exact command to reproduce the issue:
minikube dashboard

The full output of the command that failed:

🔌  Enabling dashboard ...

💣  Unable to enable dashboard: running callbacks: [addon apply: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
namespace/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf configured
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged

stderr:
The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"cluster-admin"}: cannot change roleRef
]

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose
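For context on the stderr line above: Kubernetes treats the `roleRef` of a ClusterRoleBinding as immutable, so `kubectl apply` fails whenever the existing `kubernetes-dashboard` binding points at a different ClusterRole (here `cluster-admin`) than the one in the addon manifest. A possible workaround, not confirmed in this thread, is to delete the stale binding and let minikube recreate it on the next run (note that deleting the binding briefly revokes the dashboard's permissions):

```shell
# Inspect which ClusterRole the conflicting binding currently references
kubectl get clusterrolebinding kubernetes-dashboard -o jsonpath='{.roleRef.name}'

# roleRef cannot be changed in place, so remove the stale binding entirely
kubectl delete clusterrolebinding kubernetes-dashboard

# Re-run the addon so minikube re-applies the dashboard manifests
minikube dashboard
```

This is only a sketch of the usual recovery path for "cannot change roleRef" errors; it does not address why the binding ended up pointing at `cluster-admin` in the first place.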

The output of the minikube logs command:


==> Docker <==
-- Logs begin at Tue 2020-03-24 21:08:21 UTC, end at Thu 2020-03-26 13:08:12 UTC. --
Mar 25 17:17:28 minikube dockerd[2122]: time="2020-03-25T17:17:28.656372059Z" level=info msg="shim reaped" id=b607449315cb0041e1a3346df4d350da85b849872d63454ae9f9d757e71fc468
Mar 25 17:17:28 minikube dockerd[2122]: time="2020-03-25T17:17:28.666257793Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 17:17:28 minikube dockerd[2122]: time="2020-03-25T17:17:28.700614540Z" level=info msg="shim reaped" id=e46783859df74de74ed9e230bdeddf44802e57f337d1c6cfede6b12551c9a6d8
Mar 25 17:17:28 minikube dockerd[2122]: time="2020-03-25T17:17:28.709330855Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 17:17:28 minikube dockerd[2122]: time="2020-03-25T17:17:28.935641068Z" level=info msg="shim reaped" id=086d2a3cef337c26f17323d7604730eb60f602a3828131914032799fb15e2ae6
Mar 25 17:17:28 minikube dockerd[2122]: time="2020-03-25T17:17:28.946994916Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 17:17:28 minikube dockerd[2122]: time="2020-03-25T17:17:28.961166555Z" level=info msg="shim reaped" id=59c5b80309664bce34f090817bf3dacd86a33c4feda8e39ae55951956b6bfe30
Mar 25 17:17:28 minikube dockerd[2122]: time="2020-03-25T17:17:28.982054114Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 17:17:29 minikube dockerd[2122]: time="2020-03-25T17:17:29.084481360Z" level=info msg="shim reaped" id=a80936b350adf6b05e6fb37ff5049950cc9c0a5aca87f218c73400c1817c3fdc
Mar 25 17:17:29 minikube dockerd[2122]: time="2020-03-25T17:17:29.088984692Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 17:17:44 minikube dockerd[2122]: time="2020-03-25T17:17:44.665374918Z" level=info msg="Container 87aaa34d29c562eb01bd894f676319aa7b56185911e635f02c3b70f4f5afcc65 failed to exit within 30 seconds of signal 15 - using the force"
Mar 25 17:17:44 minikube dockerd[2122]: time="2020-03-25T17:17:44.707401061Z" level=info msg="Container a4e8367a33a3281661deb6ec7b9de0c1332b5a88a50ce4f282e4aa848f82b105 failed to exit within 30 seconds of signal 15 - using the force"
Mar 25 17:17:44 minikube dockerd[2122]: time="2020-03-25T17:17:44.844566464Z" level=info msg="shim reaped" id=87aaa34d29c562eb01bd894f676319aa7b56185911e635f02c3b70f4f5afcc65
Mar 25 17:17:44 minikube dockerd[2122]: time="2020-03-25T17:17:44.854335971Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 17:17:44 minikube dockerd[2122]: time="2020-03-25T17:17:44.935418665Z" level=info msg="shim reaped" id=a4e8367a33a3281661deb6ec7b9de0c1332b5a88a50ce4f282e4aa848f82b105
Mar 25 17:17:44 minikube dockerd[2122]: time="2020-03-25T17:17:44.945242817Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 17:17:45 minikube dockerd[2122]: time="2020-03-25T17:17:45.104133163Z" level=info msg="shim reaped" id=2df5d1048342572d458c66af3ae537a183f8d76b03d8c976a1118da92a7e62ab
Mar 25 17:17:45 minikube dockerd[2122]: time="2020-03-25T17:17:45.116282055Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 17:17:45 minikube dockerd[2122]: time="2020-03-25T17:17:45.321191941Z" level=info msg="shim reaped" id=3839b5e7d5ff05058dcc6b01eb301eb2e45c536985f9e711ec637b43690c83d4
Mar 25 17:17:45 minikube dockerd[2122]: time="2020-03-25T17:17:45.341016683Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 20:37:49 minikube dockerd[2122]: time="2020-03-25T20:37:49.974652588Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4853c9192e689ab81fd6d858c5131cc1191b541514d56e6e773dbfd87d7b3ac7/shim.sock" debug=false pid=21142
Mar 25 20:37:49 minikube dockerd[2122]: time="2020-03-25T20:37:49.977818889Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d16830a7d126920607c9954e709c8de68cfa5ed39c26d37f0602c62a66ef9436/shim.sock" debug=false pid=21146
Mar 25 20:37:56 minikube dockerd[2122]: time="2020-03-25T20:37:56.503971355Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a6647ffc56ed1c1b5f2367be6acd684739607c856278f9fc27900e28ea27cf00/shim.sock" debug=false pid=21267
Mar 25 20:37:59 minikube dockerd[2122]: time="2020-03-25T20:37:59.328078500Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/03f56401c80e0ac3e2625d1d1eb1b4bb701527b8235926c02bab5a228f0db0f2/shim.sock" debug=false pid=21346
Mar 25 20:39:17 minikube dockerd[2122]: time="2020-03-25T20:39:17.609383507Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/b5a6ec8163ae4420173555e684ab3c3ab9438f434f06cca0063858c4a1bf4b3c/shim.sock" debug=false pid=21812
Mar 25 20:39:17 minikube dockerd[2122]: time="2020-03-25T20:39:17.652109654Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e5401558ca2e400883a7b6bd1849f8e4e871bbcd86a23427c003b8b8e31f34b4/shim.sock" debug=false pid=21827
Mar 25 20:39:18 minikube dockerd[2122]: time="2020-03-25T20:39:18.437909497Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/210b53377cf2f0c2dc113bb355488524722d118a5529d83e529bc0d78c6a665b/shim.sock" debug=false pid=21918
Mar 25 20:39:18 minikube dockerd[2122]: time="2020-03-25T20:39:18.720619805Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ee5ffdc317e9f6c571bd077a2b1aa7dbc9027db869cdd85ad3847f734eed4521/shim.sock" debug=false pid=21951
Mar 25 20:39:20 minikube dockerd[2122]: time="2020-03-25T20:39:20.230760966Z" level=info msg="shim reaped" id=03f56401c80e0ac3e2625d1d1eb1b4bb701527b8235926c02bab5a228f0db0f2
Mar 25 20:39:20 minikube dockerd[2122]: time="2020-03-25T20:39:20.240860308Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 20:39:20 minikube dockerd[2122]: time="2020-03-25T20:39:20.273408715Z" level=info msg="shim reaped" id=a6647ffc56ed1c1b5f2367be6acd684739607c856278f9fc27900e28ea27cf00
Mar 25 20:39:20 minikube dockerd[2122]: time="2020-03-25T20:39:20.282995679Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 20:39:20 minikube dockerd[2122]: time="2020-03-25T20:39:20.524249585Z" level=info msg="shim reaped" id=4853c9192e689ab81fd6d858c5131cc1191b541514d56e6e773dbfd87d7b3ac7
Mar 25 20:39:20 minikube dockerd[2122]: time="2020-03-25T20:39:20.534761767Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 25 20:39:20 minikube dockerd[2122]: time="2020-03-25T20:39:20.547928977Z" level=info msg="shim reaped" id=d16830a7d126920607c9954e709c8de68cfa5ed39c26d37f0602c62a66ef9436
Mar 25 20:39:20 minikube dockerd[2122]: time="2020-03-25T20:39:20.566634400Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 26 12:34:15 minikube dockerd[2122]: time="2020-03-26T12:34:15.863962299Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/7b7078e38615f92ea7433aeefde2859f4d18f17a6ee0fa3013cc7bb8ff6257dd/shim.sock" debug=false pid=6330
Mar 26 12:34:16 minikube dockerd[2122]: time="2020-03-26T12:34:16.064969803Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fd1591707357a24eb58fb97c8e8818b10e08de745d6853ba3a9280af23c3db70/shim.sock" debug=false pid=6367
Mar 26 12:34:17 minikube dockerd[2122]: time="2020-03-26T12:34:17.157033738Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/4ed46655f273b36a05dfdee96d1188189fca2818385d478e28ed3d049191fc9a/shim.sock" debug=false pid=6448
Mar 26 12:34:17 minikube dockerd[2122]: time="2020-03-26T12:34:17.751195367Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/392cfaf788c0cd0d6c514a09a4bd933c7606b00d2f7d527b60525e7776498a63/shim.sock" debug=false pid=6491
Mar 26 12:34:19 minikube dockerd[2122]: time="2020-03-26T12:34:19.464098588Z" level=info msg="shim reaped" id=210b53377cf2f0c2dc113bb355488524722d118a5529d83e529bc0d78c6a665b
Mar 26 12:34:19 minikube dockerd[2122]: time="2020-03-26T12:34:19.473906612Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 26 12:34:19 minikube dockerd[2122]: time="2020-03-26T12:34:19.508943115Z" level=info msg="shim reaped" id=ee5ffdc317e9f6c571bd077a2b1aa7dbc9027db869cdd85ad3847f734eed4521
Mar 26 12:34:19 minikube dockerd[2122]: time="2020-03-26T12:34:19.519312702Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 26 12:34:19 minikube dockerd[2122]: time="2020-03-26T12:34:19.732467118Z" level=info msg="shim reaped" id=e5401558ca2e400883a7b6bd1849f8e4e871bbcd86a23427c003b8b8e31f34b4
Mar 26 12:34:19 minikube dockerd[2122]: time="2020-03-26T12:34:19.747069537Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 26 12:34:19 minikube dockerd[2122]: time="2020-03-26T12:34:19.763527958Z" level=info msg="shim reaped" id=b5a6ec8163ae4420173555e684ab3c3ab9438f434f06cca0063858c4a1bf4b3c
Mar 26 12:34:19 minikube dockerd[2122]: time="2020-03-26T12:34:19.816588967Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 26 12:34:31 minikube dockerd[2122]: time="2020-03-26T12:34:31.997716972Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0d8885f8cd4633a8bdb5e09cfbe97ed27f39eabc769e2153a2c458a95e87e12b/shim.sock" debug=false pid=6845
Mar 26 12:34:32 minikube dockerd[2122]: time="2020-03-26T12:34:32.148943334Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ee9588f5f64a0ba56609c3895a2775ad2683390e42bf3bbc0366b6376d1d7343/shim.sock" debug=false pid=6871
Mar 26 12:34:32 minikube dockerd[2122]: time="2020-03-26T12:34:32.748197038Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/537280ecbd885cdf4e9cb32caab51b69e8cf2ae0d38c448816826a88173da5cd/shim.sock" debug=false pid=6946
Mar 26 12:34:33 minikube dockerd[2122]: time="2020-03-26T12:34:33.449825099Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9da239236edf926af489df0b2d82b2ed89757be7b3c45ba772baa8a6784cc921/shim.sock" debug=false pid=7014
Mar 26 12:34:33 minikube dockerd[2122]: time="2020-03-26T12:34:33.793331639Z" level=info msg="shim reaped" id=392cfaf788c0cd0d6c514a09a4bd933c7606b00d2f7d527b60525e7776498a63
Mar 26 12:34:33 minikube dockerd[2122]: time="2020-03-26T12:34:33.802997927Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 26 12:34:34 minikube dockerd[2122]: time="2020-03-26T12:34:34.091347942Z" level=info msg="shim reaped" id=7b7078e38615f92ea7433aeefde2859f4d18f17a6ee0fa3013cc7bb8ff6257dd
Mar 26 12:34:34 minikube dockerd[2122]: time="2020-03-26T12:34:34.099326309Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 26 12:34:36 minikube dockerd[2122]: time="2020-03-26T12:34:36.114496846Z" level=info msg="shim reaped" id=4ed46655f273b36a05dfdee96d1188189fca2818385d478e28ed3d049191fc9a
Mar 26 12:34:36 minikube dockerd[2122]: time="2020-03-26T12:34:36.123211787Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Mar 26 12:34:36 minikube dockerd[2122]: time="2020-03-26T12:34:36.311825964Z" level=info msg="shim reaped" id=fd1591707357a24eb58fb97c8e8818b10e08de745d6853ba3a9280af23c3db70
Mar 26 12:34:36 minikube dockerd[2122]: time="2020-03-26T12:34:36.322019189Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
9da239236edf9 3b08661dc379d 33 minutes ago Running dashboard-metrics-scraper 0 0d8885f8cd463
537280ecbd885 eb51a35975256 33 minutes ago Running kubernetes-dashboard 0 ee9588f5f64a0
f7e4717aa9022 nginxdemos/hello@sha256:f5a0b2a5fe9af497c4a7c186ef6412bb91ff19d39d6ac24a4997eaed2b0bb334 22 hours ago Running hello 0 2a380588f2f51
2df8419c53a2a 70f311871ae12 40 hours ago Running coredns 0 d3240e1714f9a
2b4c17c697a9d 70f311871ae12 40 hours ago Running coredns 0 23dad38eabfdd
4c365562a8fd0 4689081edb103 40 hours ago Running storage-provisioner 0 2c7aa1d69e5c7
c9896695474f3 ae853e93800dc 40 hours ago Running kube-proxy 0 d9d0f6cc58128
5a66b9ddb91ea d109c0821a2b9 40 hours ago Running kube-scheduler 0 c6aa6a1dd2a47
09627f3a38c7d 303ce5db0e90d 40 hours ago Running etcd 0 ab841ec53b59e
d3401a75d11f7 b0f1517c1f4bb 40 hours ago Running kube-controller-manager 0 a53244d39e2d1
935a35190b21f 90d27391b7808 40 hours ago Running kube-apiserver 0 89dbdc9136484

==> coredns [2b4c17c697a9] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[ERROR] plugin/errors: 2 1593394122239442072.4392389224673623765. HINFO: read udp 172.17.0.2:55865->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 1593394122239442072.4392389224673623765. HINFO: read udp 172.17.0.2:52506->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 1593394122239442072.4392389224673623765. HINFO: read udp 172.17.0.2:60328->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 1593394122239442072.4392389224673623765. HINFO: read udp 172.17.0.2:50666->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 1593394122239442072.4392389224673623765. HINFO: read udp 172.17.0.2:48451->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 1593394122239442072.4392389224673623765. HINFO: read udp 172.17.0.2:38462->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 1593394122239442072.4392389224673623765. HINFO: read udp 172.17.0.2:60554->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 1593394122239442072.4392389224673623765. HINFO: read udp 172.17.0.2:53799->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 1593394122239442072.4392389224673623765. HINFO: read udp 172.17.0.2:59856->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 1593394122239442072.4392389224673623765. HINFO: read udp 172.17.0.2:56661->192.168.64.1:53: i/o timeout

==> coredns [2df8419c53a2] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
[ERROR] plugin/errors: 2 2597924505710762564.6984974974457404266. HINFO: read udp 172.17.0.3:48486->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 2597924505710762564.6984974974457404266. HINFO: read udp 172.17.0.3:59494->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 2597924505710762564.6984974974457404266. HINFO: read udp 172.17.0.3:52823->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 2597924505710762564.6984974974457404266. HINFO: read udp 172.17.0.3:41420->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 2597924505710762564.6984974974457404266. HINFO: read udp 172.17.0.3:53872->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 2597924505710762564.6984974974457404266. HINFO: read udp 172.17.0.3:43389->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 2597924505710762564.6984974974457404266. HINFO: read udp 172.17.0.3:60399->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 2597924505710762564.6984974974457404266. HINFO: read udp 172.17.0.3:56033->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 2597924505710762564.6984974974457404266. HINFO: read udp 172.17.0.3:43585->192.168.64.1:53: i/o timeout
[ERROR] plugin/errors: 2 2597924505710762564.6984974974457404266. HINFO: read udp 172.17.0.3:55260->192.168.64.1:53: i/o timeout

==> dmesg <==
[Mar26 07:47] ERROR: earlyprintk= earlyser already used
[ +0.000000] You have booted with nomodeset. This means your GPU drivers are DISABLED
[ +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
[ +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
[ +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20180810/tbprint-177)
[ +0.000000] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[ +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[ +0.011605] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[ +3.271677] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
[ +0.013059] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
[ +0.002819] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[ +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[ +1.368301] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[ +0.790779] vboxguest: loading out-of-tree module taints kernel.
[ +0.005252] vboxguest: PCI device not found, probably running on physical hardware.
[Mar26 07:48] systemd-fstab-generator[1877]: Ignoring "noauto" for root device
[ +0.240782] systemd-fstab-generator[1894]: Ignoring "noauto" for root device
[ +0.212327] systemd-fstab-generator[1910]: Ignoring "noauto" for root device
[ +8.694571] hrtimer: interrupt took 3820499 ns
[ +22.266528] kauditd_printk_skb: 65 callbacks suppressed
[ +1.077392] systemd-fstab-generator[2321]: Ignoring "noauto" for root device
[ +3.079561] systemd-fstab-generator[2537]: Ignoring "noauto" for root device
[ +12.327765] kauditd_printk_skb: 107 callbacks suppressed
[Mar26 07:49] systemd-fstab-generator[3590]: Ignoring "noauto" for root device
[ +8.128202] kauditd_printk_skb: 32 callbacks suppressed
[ +6.166475] kauditd_printk_skb: 41 callbacks suppressed
[Mar26 07:50] NFSD: Unable to end grace period: -110
[Mar26 07:59] clocksource: timekeeping watchdog on CPU1: Marking clocksource 'tsc' as unstable because the skew is too large:
[ +0.000181] clocksource: 'hpet' wd_now: 6414290c wd_last: 6325a6a4 mask: ffffffff
[ +0.000057] clocksource: 'tsc' cs_now: 916831cbf7d cs_last: 8f6d8014024 mask: ffffffffffffffff
[ +0.000966] TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
[ +1.392342] kauditd_printk_skb: 2 callbacks suppressed
[ +6.144681] kauditd_printk_skb: 2 callbacks suppressed
[Mar26 08:59] kauditd_printk_skb: 2 callbacks suppressed
[Mar26 09:42] kauditd_printk_skb: 44 callbacks suppressed
[Mar26 10:05] kauditd_printk_skb: 2 callbacks suppressed
[Mar26 10:06] kauditd_printk_skb: 8 callbacks suppressed
[ +43.264932] kauditd_printk_skb: 2 callbacks suppressed
[Mar26 10:07] kauditd_printk_skb: 2 callbacks suppressed
[Mar26 10:09] kauditd_printk_skb: 8 callbacks suppressed
[ +5.162660] kauditd_printk_skb: 18 callbacks suppressed
[Mar26 10:10] kauditd_printk_skb: 20 callbacks suppressed
[Mar26 10:35] kauditd_printk_skb: 14 callbacks suppressed
[ +13.580249] kauditd_printk_skb: 6 callbacks suppressed
[Mar26 10:36] kauditd_printk_skb: 12 callbacks suppressed
[ +6.270311] kauditd_printk_skb: 2 callbacks suppressed
[Mar26 11:17] kauditd_printk_skb: 8 callbacks suppressed
[ +8.566429] kauditd_printk_skb: 14 callbacks suppressed
[Mar26 11:19] kauditd_printk_skb: 2 callbacks suppressed
[Mar26 12:19] kauditd_printk_skb: 27 callbacks suppressed
[Mar26 12:34] kauditd_printk_skb: 33 callbacks suppressed

==> kernel <==
13:08:12 up 5:20, 0 users, load average: 0.99, 0.94, 0.76
Linux minikube 4.19.94 #1 SMP Fri Mar 6 11:41:28 PST 2020 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.9"

==> kube-apiserver [935a35190b21] <==
I0325 18:14:03.810934 1 trace.go:116] Trace[1357377354]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 18:14:03.084183128 +0000 UTC m=+11625.711680518) (total time: 726.681277ms):
Trace[1357377354]: [726.550243ms] [726.488075ms] About to write a response
I0325 19:46:30.934950 1 trace.go:116] Trace[324383415]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-03-25 19:46:30.033931192 +0000 UTC m=+11678.550853714) (total time: 900.934008ms):
Trace[324383415]: [900.795877ms] [899.711419ms] Transaction committed
I0325 19:46:31.167243 1 trace.go:116] Trace[1174349058]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 19:46:30.041557377 +0000 UTC m=+11678.558479100) (total time: 1.125602117s):
Trace[1174349058]: [1.125466885s] [1.125419709s] About to write a response
I0325 19:46:31.178062 1 trace.go:116] Trace[736276132]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 19:46:30.033267524 +0000 UTC m=+11678.550189247) (total time: 1.144683872s):
Trace[736276132]: [901.781583ms] [901.195676ms] Object stored in database
Trace[736276132]: [1.144683872s] [242.902289ms] END
E0325 19:59:45.625556 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
I0325 20:09:31.395645 1 trace.go:116] Trace[1777785492]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 20:09:30.583129174 +0000 UTC m=+12347.516117814) (total time: 812.43745ms):
Trace[1777785492]: [812.261734ms] [811.883736ms] About to write a response
I0325 20:09:31.400338 1 trace.go:116] Trace[126718236]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 20:09:30.579286365 +0000 UTC m=+12347.512276704) (total time: 820.979138ms):
Trace[126718236]: [820.833091ms] [820.61782ms] About to write a response
I0325 20:10:24.315227 1 trace.go:116] Trace[955138177]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-03-25 20:10:23.494274026 +0000 UTC m=+12400.427263571) (total time: 820.882586ms):
Trace[955138177]: [820.798537ms] [819.776555ms] Transaction committed
I0325 20:10:24.315887 1 trace.go:116] Trace[1981375544]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 20:10:23.493939728 +0000 UTC m=+12400.426928574) (total time: 821.580864ms):
Trace[1981375544]: [821.475628ms] [821.22418ms] Object stored in database
I0325 20:10:24.315240 1 trace.go:116] Trace[1206825847]: "List etcd3" key:/cronjobs,resourceVersion:,limit:500,continue: (started: 2020-03-25 20:10:23.491459629 +0000 UTC m=+12400.424451173) (total time: 823.719769ms):
Trace[1206825847]: [823.719769ms] [823.719769ms] END
I0325 20:10:24.324792 1 trace.go:116] Trace[503881549]: "List" url:/apis/batch/v1beta1/cronjobs,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/system:serviceaccount:kube-system:cronjob-controller,client:127.0.0.1 (started: 2020-03-25 20:10:23.491332706 +0000 UTC m=+12400.424323750) (total time: 833.373659ms):
Trace[503881549]: [833.132304ms] [833.042558ms] Listing from storage done
I0325 20:10:24.327076 1 trace.go:116] Trace[1988687727]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 20:10:23.50215016 +0000 UTC m=+12400.435141704) (total time: 824.820441ms):
Trace[1988687727]: [824.396197ms] [824.34143ms] About to write a response
I0325 20:39:16.560233 1 controller.go:606] quota admission added evaluator for: namespaces
E0325 21:04:11.328572 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
I0325 21:15:34.213831 1 trace.go:116] Trace[1354719533]: "Get" url:/api/v1/namespaces/default/services/kubernetes,user-agent:kube-apiserver/v1.17.3 (linux/amd64) kubernetes/06ad960,client:127.0.0.1 (started: 2020-03-25 21:15:33.37208925 +0000 UTC m=+14802.715340447) (total time: 841.662478ms):
Trace[1354719533]: [841.49801ms] [841.458043ms] About to write a response
I0325 21:15:34.221614 1 trace.go:116] Trace[680436599]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 21:15:33.378149214 +0000 UTC m=+14802.721400112) (total time: 843.392445ms):
Trace[680436599]: [843.265847ms] [842.873767ms] About to write a response
I0325 21:16:27.070197 1 trace.go:116] Trace[172230276]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-03-25 21:16:26.274943443 +0000 UTC m=+14855.618194644) (total time: 795.165617ms):
Trace[172230276]: [795.082764ms] [794.169375ms] Transaction committed
I0325 21:16:27.070863 1 trace.go:116] Trace[160580116]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 21:16:26.291916762 +0000 UTC m=+14855.635652191) (total time: 778.39159ms):
Trace[160580116]: [778.246471ms] [778.187204ms] About to write a response
I0325 21:16:27.074542 1 trace.go:116] Trace[1031012000]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 21:16:26.274665998 +0000 UTC m=+14855.617915599) (total time: 799.798838ms):
Trace[1031012000]: [795.619465ms] [795.409182ms] Object stored in database
I0325 22:09:05.659854 1 trace.go:116] Trace[2128933055]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 22:09:04.830976821 +0000 UTC m=+14908.415437821) (total time: 828.799456ms):
Trace[2128933055]: [828.660811ms] [828.586298ms] About to write a response
I0325 22:09:05.663444 1 trace.go:116] Trace[282592137]: "Get" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 22:09:04.824363713 +0000 UTC m=+14908.408824413) (total time: 839.007831ms):
Trace[282592137]: [838.889864ms] [838.810456ms] About to write a response
I0325 22:09:58.514289 1 trace.go:116] Trace[1376491261]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-03-25 22:09:57.772444179 +0000 UTC m=+14961.356904287) (total time: 741.777254ms):
Trace[1376491261]: [741.703493ms] [739.550616ms] Transaction committed
I0325 22:09:58.514964 1 trace.go:116] Trace[387689644]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 22:09:57.771642198 +0000 UTC m=+14961.356102106) (total time: 743.256383ms):
Trace[387689644]: [743.016408ms] [742.405727ms] Object stored in database
I0325 22:09:58.520048 1 trace.go:116] Trace[1429889543]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-25 22:09:57.78660249 +0000 UTC m=+14961.371062998) (total time: 733.370849ms):
Trace[1429889543]: [733.201737ms] [733.144867ms] About to write a response
I0326 01:49:07.540554 1 trace.go:116] Trace[1660109603]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-26 01:49:06.75240246 +0000 UTC m=+15014.166771868) (total time: 788.065071ms):
Trace[1660109603]: [787.913047ms] [787.672967ms] About to write a response
I0326 01:49:07.541617 1 trace.go:116] Trace[532109692]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-26 01:49:06.745259833 +0000 UTC m=+15014.159624144) (total time: 796.277962ms):
Trace[532109692]: [794.353125ms] [794.289257ms] About to write a response
I0326 04:28:54.716799 1 trace.go:116] Trace[11474487]: "GuaranteedUpdate etcd3" type:*core.Endpoints (started: 2020-03-26 04:28:53.893125457 +0000 UTC m=+15067.016293435) (total time: 823.602949ms):
Trace[11474487]: [823.524445ms] [822.383699ms] Transaction committed
I0326 04:28:54.717082 1 trace.go:116] Trace[1345413483]: "Update" url:/api/v1/namespaces/kube-system/endpoints/kube-controller-manager,user-agent:kube-controller-manager/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-26 04:28:53.892775503 +0000 UTC m=+15067.015942981) (total time: 824.239852ms):
Trace[1345413483]: [824.118201ms] [823.86762ms] Object stored in database
I0326 04:28:54.718365 1 trace.go:116] Trace[1016613039]: "Get" url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler,user-agent:kube-scheduler/v1.17.3 (linux/amd64) kubernetes/06ad960/leader-election,client:127.0.0.1 (started: 2020-03-26 04:28:53.917439778 +0000 UTC m=+15067.040607756) (total time: 800.842423ms):
Trace[1016613039]: [800.18433ms] [800.120512ms] About to write a response
E0326 12:01:07.278543 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0326 12:13:53.399360 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0326 12:48:18.506826 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0326 12:55:05.628911 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-controller-manager [d3401a75d11f] <==
I0325 15:23:57.414387 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"15042", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: web1-74c74989b5-mzjz5
I0325 15:46:21.405041 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"redis-master", UID:"7f0da32f-fbdb-4577-bee3-0d5d89b11b1a", APIVersion:"apps/v1", ResourceVersion:"18194", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redis-master-7db7f6579f to 1
I0325 15:46:21.445163 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"redis-master-7db7f6579f", UID:"1906f627-4901-4d05-ad93-fd30e0a02853", APIVersion:"apps/v1", ResourceVersion:"18195", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-master-7db7f6579f-tltv7
I0325 15:47:18.827402 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"redis-slave", UID:"3ccf3894-103e-46c1-813a-c8b5daae57ad", APIVersion:"apps/v1", ResourceVersion:"18341", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set redis-slave-7664787fbc to 2
I0325 15:47:18.848486 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"redis-slave-7664787fbc", UID:"75393062-0638-46ff-b202-bda4d1936cd3", APIVersion:"apps/v1", ResourceVersion:"18342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-7664787fbc-qq5sn
I0325 15:47:18.899062 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"redis-slave-7664787fbc", UID:"75393062-0638-46ff-b202-bda4d1936cd3", APIVersion:"apps/v1", ResourceVersion:"18342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: redis-slave-7664787fbc-7m5fk
I0325 15:48:02.051313 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"frontend", UID:"63d5e84d-0011-47b9-b712-11ed90ef5ae3", APIVersion:"apps/v1", ResourceVersion:"18469", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set frontend-6cb7f8bd65 to 3
I0325 15:48:02.074718 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"frontend-6cb7f8bd65", UID:"0dcd1571-5313-4887-bbf9-4c09ba413e3e", APIVersion:"apps/v1", ResourceVersion:"18470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6cb7f8bd65-ppm8c
I0325 15:48:02.122617 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"frontend-6cb7f8bd65", UID:"0dcd1571-5313-4887-bbf9-4c09ba413e3e", APIVersion:"apps/v1", ResourceVersion:"18470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6cb7f8bd65-6cj4z
I0325 15:48:02.161156 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"frontend-6cb7f8bd65", UID:"0dcd1571-5313-4887-bbf9-4c09ba413e3e", APIVersion:"apps/v1", ResourceVersion:"18470", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: frontend-6cb7f8bd65-rjvw7
I0325 15:50:05.016942 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"web1", UID:"a5d4c46f-2bd9-40ab-a246-dc1e651aa997", APIVersion:"apps/v1", ResourceVersion:"18786", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set web1-74c74989b5 to 5
I0325 15:50:05.104954 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-xthj8
I0325 15:50:05.161145 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-clm8n
I0325 15:50:05.176064 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-fwcm4
I0325 15:50:05.195583 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-8b56n
I0325 15:50:05.210275 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-hg5k4
I0325 15:50:05.210436 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-dxgdb
I0325 15:50:05.212287 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-mzjz5
I0325 15:50:05.226204 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-zslh2
I0325 15:50:05.232711 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-xmpq6
I0325 15:50:05.236526 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-r8jl6
I0325 15:50:05.250661 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-wlgtw
I0325 15:50:05.251893 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-6fqxw
I0325 15:50:05.264678 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-zqc2k
I0325 15:50:05.285228 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-99gkc
I0325 15:50:05.316553 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"18787", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-kl6vd
I0325 15:51:06.027240 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"web1", UID:"a5d4c46f-2bd9-40ab-a246-dc1e651aa997", APIVersion:"apps/v1", ResourceVersion:"19017", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set web1-74c74989b5 to 1
I0325 15:51:06.054800 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"19018", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-kjx4f
I0325 15:51:06.069106 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"19018", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-8fb7v
I0325 15:51:06.086504 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"19018", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-7pmp6
I0325 15:51:06.157068 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"web1-74c74989b5", UID:"3ffd996a-4d2d-488d-877a-d8769e9730cb", APIVersion:"apps/v1", ResourceVersion:"19018", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: web1-74c74989b5-zkrzm
I0325 17:17:28.083846 1 event.go:281] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"default", Name:"frontend", UID:"e166e631-bdcc-47be-93b9-a2c3a1745f9e", APIVersion:"v1", ResourceVersion:"18601", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpoint' Failed to update endpoint default/frontend: Operation cannot be fulfilled on endpoints "frontend": the object has been modified; please apply your changes to the latest version and try again
I0325 20:37:48.771751 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"4cea54c0-d7ed-4a83-9a4c-ea026a41387d", APIVersion:"apps/v1", ResourceVersion:"27859", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-b65488c4 to 1
I0325 20:37:48.829686 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-b65488c4", UID:"561c1132-fa15-4436-92fa-884939ba7852", APIVersion:"apps/v1", ResourceVersion:"27860", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-b65488c4-zz4zq
I0325 20:37:48.945924 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"743694be-dd33-48a5-bb2c-1bcdf95706fc", APIVersion:"apps/v1", ResourceVersion:"27870", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-76585494d8 to 1
I0325 20:37:49.054994 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-76585494d8", UID:"728f5ada-a7bf-4a81-8756-70a52c3f5e31", APIVersion:"apps/v1", ResourceVersion:"27872", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-76585494d8-8wfjh
I0325 20:39:16.715406 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"743694be-dd33-48a5-bb2c-1bcdf95706fc", APIVersion:"apps/v1", ResourceVersion:"28101", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-7b64584c5c to 1
I0325 20:39:16.733872 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"4cea54c0-d7ed-4a83-9a4c-ea026a41387d", APIVersion:"apps/v1", ResourceVersion:"28102", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-79d9cd965 to 1
I0325 20:39:16.792773 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7b64584c5c", UID:"1b474a16-f56f-4660-9628-834ff69dd853", APIVersion:"apps/v1", ResourceVersion:"28103", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-7b64584c5c-n6hd4
I0325 20:39:16.792873 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-79d9cd965", UID:"ee55e6bd-8137-4ef4-9eb3-080b634d6a00", APIVersion:"apps/v1", ResourceVersion:"28104", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-79d9cd965-whpst
I0325 20:39:19.659203 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"743694be-dd33-48a5-bb2c-1bcdf95706fc", APIVersion:"apps/v1", ResourceVersion:"28127", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set dashboard-metrics-scraper-76585494d8 to 0
I0325 20:39:19.694857 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-76585494d8", UID:"728f5ada-a7bf-4a81-8756-70a52c3f5e31", APIVersion:"apps/v1", ResourceVersion:"28150", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: dashboard-metrics-scraper-76585494d8-8wfjh
I0325 20:39:19.815482 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"4cea54c0-d7ed-4a83-9a4c-ea026a41387d", APIVersion:"apps/v1", ResourceVersion:"28124", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set kubernetes-dashboard-b65488c4 to 0
I0325 20:39:19.892424 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-b65488c4", UID:"561c1132-fa15-4436-92fa-884939ba7852", APIVersion:"apps/v1", ResourceVersion:"28164", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kubernetes-dashboard-b65488c4-zz4zq
I0326 12:34:15.045561 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"4cea54c0-d7ed-4a83-9a4c-ea026a41387d", APIVersion:"apps/v1", ResourceVersion:"37869", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5996555fd8 to 1
I0326 12:34:15.089920 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5996555fd8", UID:"79823ef8-c670-4da9-889d-433a1ca75012", APIVersion:"apps/v1", ResourceVersion:"37870", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5996555fd8-4fl6v
I0326 12:34:15.333734 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"743694be-dd33-48a5-bb2c-1bcdf95706fc", APIVersion:"apps/v1", ResourceVersion:"37883", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-76585494d8 to 1
I0326 12:34:15.348524 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-76585494d8", UID:"728f5ada-a7bf-4a81-8756-70a52c3f5e31", APIVersion:"apps/v1", ResourceVersion:"37886", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-76585494d8-z9mvr
I0326 12:34:18.838237 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"4cea54c0-d7ed-4a83-9a4c-ea026a41387d", APIVersion:"apps/v1", ResourceVersion:"37882", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set kubernetes-dashboard-79d9cd965 to 0
I0326 12:34:18.911646 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-79d9cd965", UID:"ee55e6bd-8137-4ef4-9eb3-080b634d6a00", APIVersion:"apps/v1", ResourceVersion:"37915", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kubernetes-dashboard-79d9cd965-whpst
I0326 12:34:18.979510 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"743694be-dd33-48a5-bb2c-1bcdf95706fc", APIVersion:"apps/v1", ResourceVersion:"37898", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set dashboard-metrics-scraper-7b64584c5c to 0
I0326 12:34:19.110386 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7b64584c5c", UID:"1b474a16-f56f-4660-9628-834ff69dd853", APIVersion:"apps/v1", ResourceVersion:"37926", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: dashboard-metrics-scraper-7b64584c5c-n6hd4
I0326 12:34:31.117484 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"743694be-dd33-48a5-bb2c-1bcdf95706fc", APIVersion:"apps/v1", ResourceVersion:"37971", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-7b64584c5c to 1
I0326 12:34:31.165607 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7b64584c5c", UID:"1b474a16-f56f-4660-9628-834ff69dd853", APIVersion:"apps/v1", ResourceVersion:"37975", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-7b64584c5c-grs4z
I0326 12:34:31.345489 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"4cea54c0-d7ed-4a83-9a4c-ea026a41387d", APIVersion:"apps/v1", ResourceVersion:"37973", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-79d9cd965 to 1
I0326 12:34:31.404992 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-79d9cd965", UID:"ee55e6bd-8137-4ef4-9eb3-080b634d6a00", APIVersion:"apps/v1", ResourceVersion:"37990", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-79d9cd965-cnhxd
I0326 12:34:33.336919 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"4cea54c0-d7ed-4a83-9a4c-ea026a41387d", APIVersion:"apps/v1", ResourceVersion:"38003", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set kubernetes-dashboard-5996555fd8 to 0
I0326 12:34:33.392627 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5996555fd8", UID:"79823ef8-c670-4da9-889d-433a1ca75012", APIVersion:"apps/v1", ResourceVersion:"38022", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: kubernetes-dashboard-5996555fd8-4fl6v
I0326 12:34:35.744456 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"743694be-dd33-48a5-bb2c-1bcdf95706fc", APIVersion:"apps/v1", ResourceVersion:"37988", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled down replica set dashboard-metrics-scraper-76585494d8 to 0
I0326 12:34:35.849627 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-76585494d8", UID:"728f5ada-a7bf-4a81-8756-70a52c3f5e31", APIVersion:"apps/v1", ResourceVersion:"38044", FieldPath:""}): type: 'Normal' reason: 'SuccessfulDelete' Deleted pod: dashboard-metrics-scraper-76585494d8-z9mvr

==> kube-proxy [c9896695474f] <==
W0324 21:09:36.905229 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0324 21:09:36.920472 1 node.go:135] Successfully retrieved node IP: 192.168.64.2
I0324 21:09:36.920963 1 server_others.go:145] Using iptables Proxier.
W0324 21:09:36.921651 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0324 21:09:36.922809 1 server.go:571] Version: v1.17.3
I0324 21:09:36.924166 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0324 21:09:36.924697 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0324 21:09:36.925926 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0324 21:09:36.933180 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0324 21:09:36.933308 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0324 21:09:36.933991 1 config.go:313] Starting service config controller
I0324 21:09:36.934171 1 shared_informer.go:197] Waiting for caches to sync for service config
I0324 21:09:36.934009 1 config.go:131] Starting endpoints config controller
I0324 21:09:36.934443 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0324 21:09:37.036852 1 shared_informer.go:204] Caches are synced for service config
I0324 21:09:37.037721 1 shared_informer.go:204] Caches are synced for endpoints config

==> kube-scheduler [5a66b9ddb91e] <==
I0324 21:09:19.247045 1 serving.go:312] Generated self-signed cert in-memory
W0324 21:09:20.279133 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0324 21:09:20.279470 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0324 21:09:25.627962 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0324 21:09:25.628239 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0324 21:09:25.628496 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W0324 21:09:25.628681 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W0324 21:09:25.704740 1 authorization.go:47] Authorization is disabled
W0324 21:09:25.704765 1 authentication.go:92] Authentication is disabled
I0324 21:09:25.704779 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0324 21:09:25.712904 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0324 21:09:25.713773 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0324 21:09:25.713819 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0324 21:09:25.713851 1 tlsconfig.go:219] Starting DynamicServingCertificateController
E0324 21:09:25.729938 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0324 21:09:25.730299 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0324 21:09:25.732468 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0324 21:09:25.732933 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0324 21:09:25.733406 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0324 21:09:25.733699 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0324 21:09:25.735729 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0324 21:09:25.736338 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0324 21:09:25.736659 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0324 21:09:25.736928 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0324 21:09:25.737200 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0324 21:09:25.737493 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0324 21:09:26.732427 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0324 21:09:26.736390 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0324 21:09:26.738207 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0324 21:09:26.739953 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0324 21:09:26.741505 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0324 21:09:26.744469 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0324 21:09:26.746694 1 reflector.go:153] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0324 21:09:26.747404 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0324 21:09:26.748530 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0324 21:09:26.750277 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0324 21:09:26.750687 1 reflector.go:153] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0324 21:09:26.752688 1 reflector.go:153] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
I0324 21:09:27.814684 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0324 21:09:27.817441 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0324 21:09:27.841940 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
E0324 21:09:35.532609 1 factory.go:494] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Tue 2020-03-24 21:08:21 UTC, end at Thu 2020-03-26 13:08:13 UTC. --
Mar 25 20:39:23 minikube kubelet[3599]: I0325 20:39:23.099907 3599 reconciler.go:303] Volume detached for volume "kubernetes-dashboard-certs" (UniqueName: "kubernetes.io/secret/b0e79a88-eb9e-4c83-af01-dafd35de3d1c-kubernetes-dashboard-certs") on node "m01" DevicePath ""
Mar 25 20:39:23 minikube kubelet[3599]: I0325 20:39:23.100636 3599 reconciler.go:303] Volume detached for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/b0e79a88-eb9e-4c83-af01-dafd35de3d1c-kubernetes-dashboard-token-flrvs") on node "m01" DevicePath ""
Mar 26 12:34:15 minikube kubelet[3599]: I0326 12:34:15.220563 3599 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-certs" (UniqueName: "kubernetes.io/secret/1e873a0b-e481-46d7-8e1b-c96f135b759b-kubernetes-dashboard-certs") pod "kubernetes-dashboard-5996555fd8-4fl6v" (UID: "1e873a0b-e481-46d7-8e1b-c96f135b759b")
Mar 26 12:34:15 minikube kubelet[3599]: I0326 12:34:15.226879 3599 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1e873a0b-e481-46d7-8e1b-c96f135b759b-tmp-volume") pod "kubernetes-dashboard-5996555fd8-4fl6v" (UID: "1e873a0b-e481-46d7-8e1b-c96f135b759b")
Mar 26 12:34:15 minikube kubelet[3599]: I0326 12:34:15.227426 3599 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/1e873a0b-e481-46d7-8e1b-c96f135b759b-kubernetes-dashboard-token-flrvs") pod "kubernetes-dashboard-5996555fd8-4fl6v" (UID: "1e873a0b-e481-46d7-8e1b-c96f135b759b")
Mar 26 12:34:15 minikube kubelet[3599]: I0326 12:34:15.564508 3599 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/1ecc7786-75ca-4ed3-857e-9306f43cd393-kubernetes-dashboard-token-flrvs") pod "dashboard-metrics-scraper-76585494d8-z9mvr" (UID: "1ecc7786-75ca-4ed3-857e-9306f43cd393")
Mar 26 12:34:15 minikube kubelet[3599]: I0326 12:34:15.565715 3599 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1ecc7786-75ca-4ed3-857e-9306f43cd393-tmp-volume") pod "dashboard-metrics-scraper-76585494d8-z9mvr" (UID: "1ecc7786-75ca-4ed3-857e-9306f43cd393")
Mar 26 12:34:16 minikube kubelet[3599]: W0326 12:34:16.523828 3599 pod_container_deletor.go:75] Container "7b7078e38615f92ea7433aeefde2859f4d18f17a6ee0fa3013cc7bb8ff6257dd" not found in pod's containers
Mar 26 12:34:16 minikube kubelet[3599]: W0326 12:34:16.538802 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-5996555fd8-4fl6v through plugin: invalid network status for
Mar 26 12:34:16 minikube kubelet[3599]: W0326 12:34:16.926252 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-z9mvr through plugin: invalid network status for
Mar 26 12:34:17 minikube kubelet[3599]: W0326 12:34:17.620875 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-5996555fd8-4fl6v through plugin: invalid network status for
Mar 26 12:34:17 minikube kubelet[3599]: W0326 12:34:17.636966 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-z9mvr through plugin: invalid network status for
Mar 26 12:34:18 minikube kubelet[3599]: W0326 12:34:18.766973 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-5996555fd8-4fl6v through plugin: invalid network status for
Mar 26 12:34:18 minikube kubelet[3599]: W0326 12:34:18.798918 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-76585494d8-z9mvr through plugin: invalid network status for
Mar 26 12:34:21 minikube kubelet[3599]: I0326 12:34:21.235892 3599 reconciler.go:183] operationExecutor.UnmountVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46-tmp-volume") pod "ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46" (UID: "ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46")
Mar 26 12:34:21 minikube kubelet[3599]: I0326 12:34:21.236038 3599 reconciler.go:183] operationExecutor.UnmountVolume started for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/4f56b435-50ac-4b28-aa4b-0a07e9a1ec95-kubernetes-dashboard-token-flrvs") pod "4f56b435-50ac-4b28-aa4b-0a07e9a1ec95" (UID: "4f56b435-50ac-4b28-aa4b-0a07e9a1ec95")
Mar 26 12:34:21 minikube kubelet[3599]: I0326 12:34:21.237613 3599 reconciler.go:183] operationExecutor.UnmountVolume started for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46-kubernetes-dashboard-token-flrvs") pod "ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46" (UID: "ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46")
Mar 26 12:34:21 minikube kubelet[3599]: W0326 12:34:21.238494 3599 empty_dir.go:418] Warning: Failed to clear quota on /var/lib/kubelet/pods/ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46/volumes/kubernetes.io~empty-dir/tmp-volume: ClearQuota called, but quotas disabled
Mar 26 12:34:21 minikube kubelet[3599]: I0326 12:34:21.243130 3599 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46-tmp-volume" (OuterVolumeSpecName: "tmp-volume") pod "ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46" (UID: "ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46"). InnerVolumeSpecName "tmp-volume". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 26 12:34:21 minikube kubelet[3599]: I0326 12:34:21.249035 3599 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46-kubernetes-dashboard-token-flrvs" (OuterVolumeSpecName: "kubernetes-dashboard-token-flrvs") pod "ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46" (UID: "ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46"). InnerVolumeSpecName "kubernetes-dashboard-token-flrvs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 26 12:34:21 minikube kubelet[3599]: I0326 12:34:21.283041 3599 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f56b435-50ac-4b28-aa4b-0a07e9a1ec95-kubernetes-dashboard-token-flrvs" (OuterVolumeSpecName: "kubernetes-dashboard-token-flrvs") pod "4f56b435-50ac-4b28-aa4b-0a07e9a1ec95" (UID: "4f56b435-50ac-4b28-aa4b-0a07e9a1ec95"). InnerVolumeSpecName "kubernetes-dashboard-token-flrvs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 26 12:34:21 minikube kubelet[3599]: I0326 12:34:21.339421 3599 reconciler.go:303] Volume detached for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46-tmp-volume") on node "m01" DevicePath ""
Mar 26 12:34:21 minikube kubelet[3599]: I0326 12:34:21.339965 3599 reconciler.go:303] Volume detached for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/4f56b435-50ac-4b28-aa4b-0a07e9a1ec95-kubernetes-dashboard-token-flrvs") on node "m01" DevicePath ""
Mar 26 12:34:21 minikube kubelet[3599]: I0326 12:34:21.340448 3599 reconciler.go:303] Volume detached for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/ecd78ecd-a90d-49b7-8fb3-5a1f31c8cc46-kubernetes-dashboard-token-flrvs") on node "m01" DevicePath ""
Mar 26 12:34:23 minikube kubelet[3599]: I0326 12:34:23.270581 3599 reconciler.go:183] operationExecutor.UnmountVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/4f56b435-50ac-4b28-aa4b-0a07e9a1ec95-tmp-volume") pod "4f56b435-50ac-4b28-aa4b-0a07e9a1ec95" (UID: "4f56b435-50ac-4b28-aa4b-0a07e9a1ec95")
Mar 26 12:34:23 minikube kubelet[3599]: W0326 12:34:23.270885 3599 empty_dir.go:418] Warning: Failed to clear quota on /var/lib/kubelet/pods/4f56b435-50ac-4b28-aa4b-0a07e9a1ec95/volumes/kubernetes.io~empty-dir/tmp-volume: ClearQuota called, but quotas disabled
Mar 26 12:34:23 minikube kubelet[3599]: I0326 12:34:23.271118 3599 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f56b435-50ac-4b28-aa4b-0a07e9a1ec95-tmp-volume" (OuterVolumeSpecName: "tmp-volume") pod "4f56b435-50ac-4b28-aa4b-0a07e9a1ec95" (UID: "4f56b435-50ac-4b28-aa4b-0a07e9a1ec95"). InnerVolumeSpecName "tmp-volume". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 26 12:34:23 minikube kubelet[3599]: I0326 12:34:23.371637 3599 reconciler.go:303] Volume detached for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/4f56b435-50ac-4b28-aa4b-0a07e9a1ec95-tmp-volume") on node "m01" DevicePath ""
Mar 26 12:34:31 minikube kubelet[3599]: I0326 12:34:31.392994 3599 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/ca9c1c74-c449-45d0-9c8e-24e4620b02c9-tmp-volume") pod "dashboard-metrics-scraper-7b64584c5c-grs4z" (UID: "ca9c1c74-c449-45d0-9c8e-24e4620b02c9")
Mar 26 12:34:31 minikube kubelet[3599]: I0326 12:34:31.393223 3599 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/ca9c1c74-c449-45d0-9c8e-24e4620b02c9-kubernetes-dashboard-token-flrvs") pod "dashboard-metrics-scraper-7b64584c5c-grs4z" (UID: "ca9c1c74-c449-45d0-9c8e-24e4620b02c9")
Mar 26 12:34:31 minikube kubelet[3599]: I0326 12:34:31.496752 3599 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/ab6d2a1c-a7b6-4dd2-8215-bed805c48c73-tmp-volume") pod "kubernetes-dashboard-79d9cd965-cnhxd" (UID: "ab6d2a1c-a7b6-4dd2-8215-bed805c48c73")
Mar 26 12:34:31 minikube kubelet[3599]: I0326 12:34:31.497014 3599 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/ab6d2a1c-a7b6-4dd2-8215-bed805c48c73-kubernetes-dashboard-token-flrvs") pod "kubernetes-dashboard-79d9cd965-cnhxd" (UID: "ab6d2a1c-a7b6-4dd2-8215-bed805c48c73")
Mar 26 12:34:31 minikube kubelet[3599]: W0326 12:34:31.601945 3599 watcher.go:87] Error while processing event ("/sys/fs/cgroup/pids/system.slice/run-r8c71d71247b7459693ce7e581a2dbfb7.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/pids/system.slice/run-r8c71d71247b7459693ce7e581a2dbfb7.scope: no such file or directory
Mar 26 12:34:32 minikube kubelet[3599]: W0326 12:34:32.639050 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-cnhxd through plugin: invalid network status for
Mar 26 12:34:33 minikube kubelet[3599]: W0326 12:34:33.123962 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-grs4z through plugin: invalid network status for
Mar 26 12:34:33 minikube kubelet[3599]: W0326 12:34:33.124913 3599 pod_container_deletor.go:75] Container "0d8885f8cd4633a8bdb5e09cfbe97ed27f39eabc769e2153a2c458a95e87e12b" not found in pod's containers
Mar 26 12:34:33 minikube kubelet[3599]: W0326 12:34:33.141850 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-cnhxd through plugin: invalid network status for
Mar 26 12:34:33 minikube kubelet[3599]: W0326 12:34:33.251078 3599 pod_container_deletor.go:75] Container "ee9588f5f64a0ba56609c3895a2775ad2683390e42bf3bbc0366b6376d1d7343" not found in pod's containers
Mar 26 12:34:34 minikube kubelet[3599]: W0326 12:34:34.276796 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-cnhxd through plugin: invalid network status for
Mar 26 12:34:34 minikube kubelet[3599]: W0326 12:34:34.329138 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-grs4z through plugin: invalid network status for
Mar 26 12:34:34 minikube kubelet[3599]: I0326 12:34:34.840072 3599 reconciler.go:183] operationExecutor.UnmountVolume started for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/1e873a0b-e481-46d7-8e1b-c96f135b759b-kubernetes-dashboard-token-flrvs") pod "1e873a0b-e481-46d7-8e1b-c96f135b759b" (UID: "1e873a0b-e481-46d7-8e1b-c96f135b759b")
Mar 26 12:34:34 minikube kubelet[3599]: I0326 12:34:34.841564 3599 reconciler.go:183] operationExecutor.UnmountVolume started for volume "kubernetes-dashboard-certs" (UniqueName: "kubernetes.io/secret/1e873a0b-e481-46d7-8e1b-c96f135b759b-kubernetes-dashboard-certs") pod "1e873a0b-e481-46d7-8e1b-c96f135b759b" (UID: "1e873a0b-e481-46d7-8e1b-c96f135b759b")
Mar 26 12:34:34 minikube kubelet[3599]: I0326 12:34:34.844021 3599 reconciler.go:183] operationExecutor.UnmountVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1e873a0b-e481-46d7-8e1b-c96f135b759b-tmp-volume") pod "1e873a0b-e481-46d7-8e1b-c96f135b759b" (UID: "1e873a0b-e481-46d7-8e1b-c96f135b759b")
Mar 26 12:34:34 minikube kubelet[3599]: W0326 12:34:34.844415 3599 empty_dir.go:418] Warning: Failed to clear quota on /var/lib/kubelet/pods/1e873a0b-e481-46d7-8e1b-c96f135b759b/volumes/kubernetes.io~empty-dir/tmp-volume: ClearQuota called, but quotas disabled
Mar 26 12:34:34 minikube kubelet[3599]: I0326 12:34:34.845972 3599 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e873a0b-e481-46d7-8e1b-c96f135b759b-tmp-volume" (OuterVolumeSpecName: "tmp-volume") pod "1e873a0b-e481-46d7-8e1b-c96f135b759b" (UID: "1e873a0b-e481-46d7-8e1b-c96f135b759b"). InnerVolumeSpecName "tmp-volume". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 26 12:34:34 minikube kubelet[3599]: I0326 12:34:34.852148 3599 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e873a0b-e481-46d7-8e1b-c96f135b759b-kubernetes-dashboard-certs" (OuterVolumeSpecName: "kubernetes-dashboard-certs") pod "1e873a0b-e481-46d7-8e1b-c96f135b759b" (UID: "1e873a0b-e481-46d7-8e1b-c96f135b759b"). InnerVolumeSpecName "kubernetes-dashboard-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 26 12:34:34 minikube kubelet[3599]: I0326 12:34:34.859061 3599 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e873a0b-e481-46d7-8e1b-c96f135b759b-kubernetes-dashboard-token-flrvs" (OuterVolumeSpecName: "kubernetes-dashboard-token-flrvs") pod "1e873a0b-e481-46d7-8e1b-c96f135b759b" (UID: "1e873a0b-e481-46d7-8e1b-c96f135b759b"). InnerVolumeSpecName "kubernetes-dashboard-token-flrvs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 26 12:34:34 minikube kubelet[3599]: I0326 12:34:34.946123 3599 reconciler.go:303] Volume detached for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/1e873a0b-e481-46d7-8e1b-c96f135b759b-kubernetes-dashboard-token-flrvs") on node "m01" DevicePath ""
Mar 26 12:34:34 minikube kubelet[3599]: I0326 12:34:34.946583 3599 reconciler.go:303] Volume detached for volume "kubernetes-dashboard-certs" (UniqueName: "kubernetes.io/secret/1e873a0b-e481-46d7-8e1b-c96f135b759b-kubernetes-dashboard-certs") on node "m01" DevicePath ""
Mar 26 12:34:34 minikube kubelet[3599]: I0326 12:34:34.946686 3599 reconciler.go:303] Volume detached for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1e873a0b-e481-46d7-8e1b-c96f135b759b-tmp-volume") on node "m01" DevicePath ""
Mar 26 12:34:35 minikube kubelet[3599]: W0326 12:34:35.668584 3599 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-grs4z through plugin: invalid network status for
Mar 26 12:34:36 minikube kubelet[3599]: E0326 12:34:36.777020 3599 remote_runtime.go:295] ContainerStatus "4ed46655f273b36a05dfdee96d1188189fca2818385d478e28ed3d049191fc9a" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 4ed46655f273b36a05dfdee96d1188189fca2818385d478e28ed3d049191fc9a
Mar 26 12:34:36 minikube kubelet[3599]: I0326 12:34:36.876591 3599 reconciler.go:183] operationExecutor.UnmountVolume started for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/1ecc7786-75ca-4ed3-857e-9306f43cd393-kubernetes-dashboard-token-flrvs") pod "1ecc7786-75ca-4ed3-857e-9306f43cd393" (UID: "1ecc7786-75ca-4ed3-857e-9306f43cd393")
Mar 26 12:34:36 minikube kubelet[3599]: I0326 12:34:36.876886 3599 reconciler.go:183] operationExecutor.UnmountVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1ecc7786-75ca-4ed3-857e-9306f43cd393-tmp-volume") pod "1ecc7786-75ca-4ed3-857e-9306f43cd393" (UID: "1ecc7786-75ca-4ed3-857e-9306f43cd393")
Mar 26 12:34:36 minikube kubelet[3599]: W0326 12:34:36.878172 3599 empty_dir.go:418] Warning: Failed to clear quota on /var/lib/kubelet/pods/1ecc7786-75ca-4ed3-857e-9306f43cd393/volumes/kubernetes.io~empty-dir/tmp-volume: ClearQuota called, but quotas disabled
Mar 26 12:34:36 minikube kubelet[3599]: I0326 12:34:36.879621 3599 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ecc7786-75ca-4ed3-857e-9306f43cd393-tmp-volume" (OuterVolumeSpecName: "tmp-volume") pod "1ecc7786-75ca-4ed3-857e-9306f43cd393" (UID: "1ecc7786-75ca-4ed3-857e-9306f43cd393"). InnerVolumeSpecName "tmp-volume". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 26 12:34:36 minikube kubelet[3599]: I0326 12:34:36.890043 3599 operation_generator.go:713] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ecc7786-75ca-4ed3-857e-9306f43cd393-kubernetes-dashboard-token-flrvs" (OuterVolumeSpecName: "kubernetes-dashboard-token-flrvs") pod "1ecc7786-75ca-4ed3-857e-9306f43cd393" (UID: "1ecc7786-75ca-4ed3-857e-9306f43cd393"). InnerVolumeSpecName "kubernetes-dashboard-token-flrvs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 26 12:34:36 minikube kubelet[3599]: I0326 12:34:36.984682 3599 reconciler.go:303] Volume detached for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/1ecc7786-75ca-4ed3-857e-9306f43cd393-tmp-volume") on node "m01" DevicePath ""
Mar 26 12:34:36 minikube kubelet[3599]: I0326 12:34:36.985785 3599 reconciler.go:303] Volume detached for volume "kubernetes-dashboard-token-flrvs" (UniqueName: "kubernetes.io/secret/1ecc7786-75ca-4ed3-857e-9306f43cd393-kubernetes-dashboard-token-flrvs") on node "m01" DevicePath ""
Mar 26 12:34:37 minikube kubelet[3599]: E0326 12:34:37.158514 3599 kubelet_pods.go:1105] Failed killing the pod "dashboard-metrics-scraper-76585494d8-z9mvr": failed to "KillContainer" for "dashboard-metrics-scraper" with KillContainerError: "rpc error: code = Unknown desc = Error: No such container: 4ed46655f273b36a05dfdee96d1188189fca2818385d478e28ed3d049191fc9a"

==> kubernetes-dashboard [537280ecbd88] <==
2020/03/26 12:34:33 Starting overwatch
2020/03/26 12:34:33 Using namespace: kubernetes-dashboard
2020/03/26 12:34:33 Using in-cluster config to connect to apiserver
2020/03/26 12:34:33 Using secret token for csrf signing
2020/03/26 12:34:33 Initializing csrf token from kubernetes-dashboard-csrf secret
2020/03/26 12:34:33 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2020/03/26 12:34:33 Successful initial request to the apiserver, version: v1.17.3
2020/03/26 12:34:33 Generating JWE encryption key
2020/03/26 12:34:33 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2020/03/26 12:34:33 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2020/03/26 12:34:34 Initializing JWE encryption key from synchronized object
2020/03/26 12:34:34 Creating in-cluster Sidecar client
2020/03/26 12:34:34 Successful request to sidecar
2020/03/26 12:34:34 Serving insecurely on HTTP port: 9090

==> storage-provisioner [4c365562a8fd] <==

The operating system version:
MacOS Catalina 10.15.4

@TroubleConsultant
Author

Note: the paths in the sudo command don't appear to be correct. There is no "/var/lib/minikube", so kubectl cannot be on that path. Minikube is installed at /usr/local/bin. I tried to track down where the paths for the callback are configured, but failed.

@tstromberg tstromberg changed the title minikube enable dashboard fails - paths incorrect in callbacks dashboard: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"cluster-admin"}: cannot change roleRef Mar 26, 2020
@tstromberg
Contributor

tstromberg commented Mar 26, 2020

The paths are within the VM or container, so they should be OK.

What minikube start options did you use? I'm curious about the RBAC error.

@tstromberg tstromberg added co/dashboard dashboard related issues kind/support Categorizes issue or PR as a support question. labels Mar 26, 2020
@TroubleConsultant
Author

Just "minikube start" with no particular options. Everything else seems to work; I've been able to apply deployments and access them. Just the dashboard seems off.

I re-ran start, and the output is:
😄 minikube v1.8.2 on Darwin 10.15.4
▪ MINIKUBE_ACTIVE_DOCKERD=minikube
✨ Using the hyperkit driver based on existing profile
⌛ Reconfiguring existing host ...
🏃 Using the running hyperkit "minikube" VM ...
🐳 Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
🚀 Launching Kubernetes ...
🌟 Enabling addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"

A list of pods (showing the dashboard pods are up):
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                web1-74c74989b5-cdb84                        1/1     Running   1          25h
kube-system            coredns-6955765f44-bv2sd                     1/1     Running   1          43h
kube-system            coredns-6955765f44-nh9m9                     1/1     Running   1          43h
kube-system            etcd-m01                                     1/1     Running   2          43h
kube-system            kube-apiserver-m01                           1/1     Running   2          43h
kube-system            kube-controller-manager-m01                  1/1     Running   1          43h
kube-system            kube-proxy-ndqn6                             1/1     Running   1          43h
kube-system            kube-scheduler-m01                           1/1     Running   2          43h
kube-system            storage-provisioner                          1/1     Running   1          43h
kubernetes-dashboard   dashboard-metrics-scraper-7b64584c5c-grs4z   1/1     Running   1          4h14m
kubernetes-dashboard   kubernetes-dashboard-79d9cd965-cnhxd         1/1     Running   1          4h14m

@tstromberg
Contributor

tstromberg commented Mar 30, 2020

Here's a theory: the cluster has a role binding left over from a previous release of Kubernetes, or at least a previous release of the Kubernetes dashboard. I base this on: https://stackoverflow.com/questions/59020654/the-clusterrolebinding-kubernetes-dashboard-is-invalid-roleref-invalid-valu
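
For context on the error itself: `roleRef` on a ClusterRoleBinding is immutable, so `kubectl apply` cannot change it in place. Below is a sketch of the binding the addon applies (field values inferred from the error message, so treat them as an assumption; a binding left by an older dashboard release would carry a different `roleRef.name`):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:                               # immutable once the binding exists
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin                  # a stale binding would reference a different ClusterRole
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
```

If the stored binding's `roleRef` differs, the apply fails with exactly the `cannot change roleRef` error reported here, and the binding has to be deleted and recreated.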

Can someone who is running into this problem run an experiment for me and share the output of the following commands? I'm confident one of them will fix the problem, but I'm unsure which:

  • kubectl get clusterrolebinding kubernetes-dashboard
  • minikube addons disable dashboard
  • kubectl get clusterrolebinding kubernetes-dashboard
  • minikube dashboard
  • kubectl get clusterrolebinding kubernetes-dashboard
  • kubectl delete clusterrolebinding kubernetes-dashboard
  • minikube dashboard
  • minikube addons disable dashboard
  • minikube dashboard
  • kubectl get clusterrolebinding kubernetes-dashboard

Thank you!

@tstromberg tstromberg changed the title dashboard: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"cluster-admin"}: cannot change roleRef dashboard: The ClusterRoleBinding "kubernetes-dashboard" is invalid: cannot change roleRef Mar 30, 2020
@tstromberg tstromberg added the triage/needs-information Indicates an issue needs more information in order to work on it. label Mar 30, 2020
@tstromberg
Contributor

@judimator - please let me know if you are able to perform the experiment steps above.

Worst case, this problem can be overcome by minikube delete, but I'd like to know which one works so that we can prevent the problem from occurring in the code base.

@judimator

> @judimator - please let me know if you are able to perform the experiment steps above.
>
> Worst case, this problem can be overcome by minikube delete, but I'd like to know which one works so that we can prevent the problem from occurring in the code base.

Hi @tstromberg. I will do the experiment, but a little bit later (in 7-8 hours).
Many thanks!

@TroubleConsultant
Author

@tstromberg I executed the provided steps in order, and after deleting the cluster role binding and disabling the dashboard, I was able to access the dashboard. Attached is the output as requested:

Keiths-Air:~ keithmcmillan$ minikube start
🎉 minikube 1.9.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.9.0
💡 To disable this notice, run: 'minikube config set WantUpdateNotification false'

🙄 minikube v1.8.2 on Darwin 10.15.4
✨ Using the hyperkit driver based on existing profile
⌛ Reconfiguring existing host ...
🔄 Starting existing hyperkit VM for "minikube" ...
🐳 Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
🚀 Launching Kubernetes ...
🌟 Enabling addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
Keiths-Air:~ keithmcmillan$ kubectl get clusterrolebinding kubernetes-dashboard
NAME AGE
kubernetes-dashboard 4d20h
Keiths-Air:~ keithmcmillan$ minikube addons disable dashboard
🌑 "The 'dashboard' addon is disabled
Keiths-Air:~ keithmcmillan$ kubectl get clusterrolebinding kubernetes-dashboard
NAME AGE
kubernetes-dashboard 4d20h
Keiths-Air:~ keithmcmillan$ minikube dashboard
🔌 Enabling dashboard ...

💣 Unable to enable dashboard: running callbacks: [addon apply: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
namespace/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf configured
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged

stderr:
The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"cluster-admin"}: cannot change roleRef
]

😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
Keiths-Air:~ keithmcmillan$ kubectl get clusterrolebinding kubernetes-dashboard
NAME AGE
kubernetes-dashboard 4d20h
Keiths-Air:~ keithmcmillan$ kubectl delete clusterrolebinding kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
Keiths-Air:~ keithmcmillan$ minikube addons disable dashboard
🌑 "The 'dashboard' addon is disabled
Keiths-Air:~ keithmcmillan$ minikube dashboard
🔌 Enabling dashboard ...
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
🎉 Opening http://127.0.0.1:56102/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

At this point, the dashboard was accessible
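
The working sequence above boils down to three commands. Here they are collected into one sketch of a script (it assumes `kubectl` and `minikube` are on the PATH; the `run` helper echoes each command and skips execution when a tool is missing, so it degrades to a dry run):

```shell
#!/bin/sh
# Workaround sketch for "cannot change roleRef": delete the stale
# ClusterRoleBinding, then re-enable the dashboard addon so minikube
# recreates the binding from scratch.
run() {
  echo "+ $*"                          # show the command being run
  if command -v "$1" >/dev/null 2>&1; then
    "$@" || true                       # tolerate failures (e.g. no cluster)
  fi
}

run kubectl delete clusterrolebinding kubernetes-dashboard
run minikube addons disable dashboard
run minikube dashboard
```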

@judimator

@tstromberg I executed the provided steps in order, and after the delete of the cluster role binding and disable of the dashboard, I was able to access the dashboard. Attached is the output as requested:

Keiths-Air:~ keithmcmillan$ minikube start
tada minikube 1.9.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.9.0
bulb To disable this notice, run: 'minikube config set WantUpdateNotification false'
roll_eyes minikube v1.8.2 on Darwin 10.15.4
sparkles Using the hyperkit driver based on existing profile
⌛ Reconfiguring existing host ...
arrows_counterclockwise Starting existing hyperkit VM for "minikube" ...
whale Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
rocket Launching Kubernetes ...
star2 Enabling addons: default-storageclass, storage-provisioner
surfing_man Done! kubectl is now configured to use "minikube"
Keiths-Air:~ keithmcmillan$ kubectl get clusterrolebinding kubernetes-dashboard
NAME AGE
kubernetes-dashboard 4d20h
Keiths-Air:~ keithmcmillan$ minikube addons disable dashboard
new_moon "The 'dashboard' addon is disabled
Keiths-Air:~ keithmcmillan$ kubectl get clusterrolebinding kubernetes-dashboard
NAME AGE
kubernetes-dashboard 4d20h
Keiths-Air:~ keithmcmillan$ minikube dashboard
electric_plug Enabling dashboard ...
bomb Unable to enable dashboard: running callbacks: [addon apply: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.17.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
stdout:
namespace/kubernetes-dashboard unchanged
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf configured
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
stderr:
The ClusterRoleBinding "kubernetes-dashboard" is invalid: roleRef: Invalid value: rbac.RoleRef{APIGroup:"rbac.authorization.k8s.io", Kind:"ClusterRole", Name:"cluster-admin"}: cannot change roleRef
]
😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose
Keiths-Air:~ keithmcmillan$ kubectl get clusterrolebinding kubernetes-dashboard
NAME                   AGE
kubernetes-dashboard   4d20h
Keiths-Air:~ keithmcmillan$ kubectl delete clusterrolebinding kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io "kubernetes-dashboard" deleted
Keiths-Air:~ keithmcmillan$ minikube addons disable dashboard
🌑  "The 'dashboard' addon is disabled
Keiths-Air:~ keithmcmillan$ minikube dashboard
🔌  Enabling dashboard ...
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
🎉  Opening http://127.0.0.1:56102/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ in your default browser...

At this point, the dashboard was accessible

Much appreciated. It really helped! Thanks!

@tstromberg (Contributor)

Thank you for the confirmation. To fix this properly, we'll need to fix the dashboard deployment to either reconcile or delete the existing clusterrolebinding.

Help wanted!

@tstromberg tstromberg added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. and removed triage/needs-information Indicates an issue needs more information in order to work on it. kind/support Categorizes issue or PR as a support question. labels Mar 30, 2020
@tstromberg (Contributor)

If someone is interested in chasing this down, I'd first see whether anything can be done to reliably reconcile the role binding here:

https://github.com/kubernetes/minikube/blob/master/deploy/addons/dashboard/dashboard-clusterrolebinding.yaml

If not, here's an example place where one could introduce a specific hack to query the apiserver for a role binding and delete it before enablement:
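Because `roleRef` is immutable, `kubectl apply` fails whenever the existing ClusterRoleBinding points at a different role than the addon manifest wants (here, `cluster-admin`). A minimal sketch of the pre-enablement check described above — the `needs_recreate` helper and the hard-coded `cluster-admin` target are assumptions for illustration, not minikube code:

```shell
# Decide whether an existing ClusterRoleBinding must be deleted before
# re-applying the addon manifest. roleRef cannot be changed in place, so
# `kubectl apply` fails whenever the existing roleRef differs from the
# desired one; deleting the binding first lets apply recreate it cleanly.
needs_recreate() {
  current="$1"   # roleRef.name of the binding already in the cluster ("" if absent)
  desired="$2"   # roleRef.name the addon manifest wants (cluster-admin here)
  if [ -n "$current" ] && [ "$current" != "$desired" ]; then
    echo "delete"
  else
    echo "keep"
  fi
}

# Hypothetical usage before enabling the addon:
#   current=$(kubectl get clusterrolebinding kubernetes-dashboard \
#     -o jsonpath='{.roleRef.name}' 2>/dev/null)
#   if [ "$(needs_recreate "$current" cluster-admin)" = delete ]; then
#     kubectl delete clusterrolebinding kubernetes-dashboard
#   fi
```

This mirrors the manual workaround in the transcript above: the binding left over from an older dashboard deployment had a different `roleRef`, and deleting it allowed `minikube dashboard` to recreate it and start successfully.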

@TroubleConsultant (Author)

Hi @tstromberg, I'm finishing up with a client today and rolling off. I will try to take a look in a couple of days if nobody else has gotten to it first.
Thanks again for your help fixing this!

@tstromberg tstromberg added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Mar 31, 2020
@tstromberg tstromberg added the needs-solution-message Issues where offering a solution for an error would be helpful label Apr 22, 2020
@tstromberg (Contributor)

Closing this as stale, but leaving a label to make sure we point users back to this issue if they run into it.
