After running cluster.yml, the local-path storage provisioner (the Rancher implementation) is in an error state.
I get exactly the same behavior on my 4 clusters, so it should be easy to reproduce.
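For context, the state below appears right after the standard kubespray cluster play; the inventory path here is only an example, not my actual layout:

$ ansible-playbook -i inventory/mycluster/hosts.yaml -b cluster.yml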
$ kubectl get pod -n local-path-storage --context dev
NAME READY STATUS RESTARTS AGE
local-path-provisioner-66df45bfdd-m2v5v 0/1 CrashLoopBackOff 272 22h
$ kubectl logs deployment.apps/local-path-provisioner -n local-path-storage --context dev
time="2021-02-24T10:39:15Z" level=fatal msg="Error starting daemon: invalid empty flag helper-pod-file and it also does not exist at ConfigMap local-path-storage/local-path-config with err: configmaps \"local-path-config\" is forbidden: User \"system:serviceaccount:local-path-storage:local-path-provisioner-service-account\" cannot get resource \"configmaps\" in API group \"\" in the namespace \"local-path-storage\""
$ kubectl get cm,role,rolebinding,sa -n local-path-storage --context dev
NAME DATA AGE
configmap/kube-root-ca.crt 1 21d
configmap/local-path-config 4 309d
NAME ROLE AGE
rolebinding.rbac.authorization.k8s.io/psp:local-path-provisioner ClusterRole/psp:local-path-provisioner 309d
NAME SECRETS AGE
serviceaccount/default 1 309d
serviceaccount/local-path-provisioner-service-account 1 309d
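The listing shows no Role and no RoleBinding granting this service account read access to configmaps in the namespace. A quick way to confirm the missing permission, reusing the exact names from the log line (expected to answer "no"):

$ kubectl auth can-i get configmaps -n local-path-storage \
    --as=system:serviceaccount:local-path-storage:local-path-provisioner-service-account --context dev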
Workaround
$ kubectl apply -f role-local-path.yaml -n local-path-storage --context dev
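The contents of role-local-path.yaml are not reproduced here; a minimal sketch of what such a file could look like, granting only the access named in the error (the object names are assumptions, not necessarily the exact file I used):

# Role allowing the provisioner to read its config map in its own namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: local-path-provisioner-role        # assumed name
  namespace: local-path-storage
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["get", "list", "watch"]
---
# Bind the Role to the provisioner's service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: local-path-provisioner-bind        # assumed name
  namespace: local-path-storage
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: local-path-provisioner-role
subjects:
  - kind: ServiceAccount
    name: local-path-provisioner-service-account
    namespace: local-path-storage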
Then restart the deployment:
$ kubectl rollout restart deployment.apps/local-path-provisioner -n local-path-storage --context dev
$ kubectl get pod -n local-path-storage --context dev
NAME READY STATUS RESTARTS AGE
pod/local-path-provisioner-7fdcf54b7d-6vwbd 0/1 Terminating 38 172m
pod/local-path-provisioner-855db6667b-lnqlz 1/1 Running 0 3s
Repeat the apply + restart on all clusters.
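To verify the provisioner actually works again, a small test PVC can be created (a sketch; local-path is the provisioner's default storage class name and may differ per setup, and the claim stays Pending until a pod consumes it because the class uses WaitForFirstConsumer):

# Test claim to confirm dynamic provisioning works after the fix
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-test
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: local-path   # assumed default class name
  resources:
    requests:
      storage: 128Mi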
Additional info
$ git rev-parse HEAD
125148e7a5d75d7f61f8f2f93210ee0a2cddd1b4
$ kubectl get deploy -o wide
NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
local-path-provisioner 1/1 1 1 309d local-path-provisioner docker.io/rancher/local-path-provisioner:v0.0.19 app=local-path-provisioner
$ kubectl get node --context dev
NAME STATUS ROLES AGE VERSION
frd3kq-k8s01g Ready worker 308d v1.20.4
frd3kq-k8s02g Ready worker 294d v1.20.4
frd3kq-k8s03 Ready worker 309d v1.20.4
frd3kq-k8s04 Ready worker 309d v1.20.4
kube-dev-master1 Ready control-plane,master 309d v1.20.4
kube-dev-master2 Ready control-plane,master 309d v1.20.4
kube-dev-master3 Ready control-plane,master 309d v1.20.4