Provisioner stops processing PVCs #1019
Comments
Could you please check your API server? It seems that the API server is not working. |
If I redeploy the provisioner, it starts working again. So my question is: if the API server is not working, how does redeploying the provisioner fix it? |
What version of the library are you using? It uses shared informers now and doesn't do anything special with its watches. |
I am using the controller structure from external-storage with the following Kubernetes library: k8s.io/client-go v6.0.0. And I am getting the above logs, plus: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:serviceaccount:default:" cannot list persistentvolumes at the cluster scope: User "system:serviceaccount:default:" cannot list all persistentvolumes in the cluster. So first it shows connection refused, after that it is not able to list the resources in the cluster, and at the end of the log it is printing |
Could you please make sure the service account the provisioner is using has access to list all persistentvolumes? Please refer to the docs. |
Yes, it is an RBAC issue. Refer also to https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/rbac.yaml#L1 for an example of a ClusterRole. All provisioners using the library need, at a minimum, the RBAC permissions listed there: read/write PVs, read PVCs, read StorageClasses, write events. /area lib |
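The minimum permissions listed above can be sketched as a ClusterRole plus a ClusterRoleBinding. This is a minimal sketch modeled on the linked rbac.yaml; the object names (`external-provisioner-runner`, `run-external-provisioner`) and the service account name/namespace are placeholders you would replace with your own:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: external-provisioner-runner
rules:
  # read/write PVs
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  # read PVCs (update is needed to set annotations on claims)
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  # read StorageClasses
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  # write events
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-external-provisioner
subjects:
  - kind: ServiceAccount
    name: external-provisioner   # your provisioner's service account
    namespace: default           # the namespace it runs in
roleRef:
  kind: ClusterRole
  name: external-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```

Note that the ClusterRole is required (not a namespaced Role) because PersistentVolumes and StorageClasses are cluster-scoped resources, which is exactly why the error above says "cannot list persistentvolumes at the cluster scope".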
Thanks, I checked the RBAC yaml and I didn't have the following Role and RoleBinding in my rbac.yaml; I have added them now:
kind: Role
kind: RoleBinding
|
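For reference, the namespaced Role/RoleBinding pair in the linked rbac.yaml is the leader-election lock on endpoints. A minimal sketch, assuming the same placeholder names as above (the Role name and service account are hypothetical):

```yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-external-provisioner
rules:
  # the provisioner library takes a leader-election lock on an endpoints object
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-external-provisioner
subjects:
  - kind: ServiceAccount
    name: external-provisioner   # your provisioner's service account
    namespace: default           # the namespace it runs in
roleRef:
  kind: Role
  name: leader-locking-external-provisioner
  apiGroup: rbac.authorization.k8s.io
```

Unlike the PV/StorageClass permissions, these are namespaced because the lock object lives in the provisioner's own namespace.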
May I get help on this? |
Try the latest library version, e.g. https://github.com/kubernetes-incubator/external-storage/releases/tag/v5.2.0 |
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
I am using the concept of external-storage.
I am running a provisioner which processes PVCs and binds them to PVs.
But after some time it stops processing PVCs, and the PVCs remain in the Pending state, as if the provisioner has stopped working. Restarting the provisioner solves this, but I don't want to restart it every time.
A few log statements which look suspicious are as follows:
I1003 15:35:12.426810 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
I1003 15:35:12.427387 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
I1003 15:35:12.427632 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
I1003 15:35:12.427771 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
I1003 15:35:12.427909 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
Failed to watch *v1.Node: Get /api/v1/nodes?resourceVersion=98676317&timeoutSeconds=354&watch=true: dial tcp 172.56.0.1:443: getsockopt: connection refused
Failed to watch *v1.PersistentVolume: Get /api/v1/persistentvolumes?resourceVersion=98666055&timeoutSeconds=428&watch=true: dial tcp 172.56.0.1:443: getsockopt: connection refused
I1003 15:35:12.428047 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
I1003 15:35:12.428057 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
I1003 15:35:12.428180 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
I1003 15:35:12.428287 1 streamwatcher.go:103] Unexpected EOF during watch stream event decoding: unexpected EOF
Failed to watch *v1.StorageClass: Get /apis/storage.k8s.io/v1/storageclasses?resourceVersion=98658274&timeoutSeconds=569&watch=true: dial tcp 172.56.0.1:443: getsockopt: connection refused
watch of *v1.PersistentVolume ended with: The resourceVersion for the provided watch is too old.
watch of *v1.StorageClass ended with: The resourceVersion for the provided watch is too old.
watch of *v1.PersistentVolumeClaim ended with: The resourceVersion for the provided watch is too old.
watch of *v1.ConfigMap ended with: The resourceVersion for the provided watch is too old.
watch of *v1.PersistentVolume ended with: The resourceVersion for the provided watch is too old.