[nfs-client-provisioner]PVC pending state #754
Hi, me too, I'm stuck here: the PVC is in Pending state. I tried both the quick start and the normal mode.
Same issue for me. I also deployed the OpenEBS provisioner, and it works well.
Is that the full log of the provisioner? If not, can you provide it? /area nfs-client
I have the same issue. The log (-v 4) shows nothing interesting:

```
I0625 13:00:02.214122       1 controller.go:492] Starting provisioner controller ac4f591d-7877-11e8-8e50-0a58c0a80305!
```
Hello - I have the same issue (PVC stuck in Pending state), but I'm using the EFS provisioner. The logs for the pod do not show much.
How can I get further information to help debug this issue? One other thing: I'm running on Amazon EKS (quite new) and had to install nfs-utils on the nodes as it was not installed by default. Perhaps the issue is related and there is some other missing software there? Kind regards, Pete
Strangely, it's now working as expected. Things I changed:
The PVCs now bind properly and are no longer stuck in the Pending state. Perhaps it's something to do with the container not liking starting out with a different serviceAccount and then being subsequently patched? ...
If it were a serviceAccount misconfiguration I would expect to see many permission-denied errors in the log, so I am still perplexed!
Stuck here too. @mostlyAtNight
Stuck for me too. Describe output for the PVC:

```
# kubectl describe pvc efs
Name:          efs
Namespace:     default
StorageClass:  aws-efs
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-class=aws-efs
               volume.beta.kubernetes.io/storage-provisioner=example.com/aws-efs
Finalizers:    []
Capacity:
Access Modes:
Events:
  Type    Reason                Age                From                         Message
  ----    ------                ----               ----                         -------
  Normal  ExternalProvisioning  1m (x62 over 16m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "example.com/aws-efs" or manually created by system administrator
```

Looks like the functionality from Documentation:
We had the same issue with EFS and EKS using the AWS EKS AMI for the nodes; we solved it by using
I have the same PVC pending issue:

```
Name: efs
Normal  ExternalProvisioning  1m (x61 over 16m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "example.com/aws-efs" or manually created by system administrator
```
I came across the same issue using kops. I followed everything specified in the docs and also used this as a reference: https://medium.com/@while1eq1/using-amazon-efs-in-a-multiaz-kubernetes-setup-57922e032776. My pod file looks different, though, with an nginx instance trying to mount the EFS cluster.
Here is a sample nginx yaml which I used to test:
Can you confirm the provisioner log looks the same as above (i.e. it is basically empty), per #754 (comment)? What if you try a newer/older image? https://quay.io/repository/external_storage/efs-provisioner?tab=tags (What version are you using now?)
I resolved the problem by creating the PV before the PVC.
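For reference, a manually created PV can satisfy a pending claim when the dynamic provisioner is not working. A minimal sketch, assuming an existing EFS filesystem mounted over NFS; the PV name, server address, and path here are illustrative placeholders, not taken from this thread:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv                  # illustrative name
spec:
  capacity:
    storage: 1Mi                # required field; EFS does not enforce capacity
  accessModes:
    - ReadWriteMany
  storageClassName: aws-efs     # must match the PVC's storage class
  nfs:
    server: fs-12345678.efs.us-east-1.amazonaws.com  # placeholder EFS DNS name
    path: /
```

With a matching `storageClassName` and access mode, the pending PVC can bind to this PV without the external provisioner being involved.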
I tried the same, creating a PV before the PVC, and it did not work, so I'm not sure it has anything to do with the order. I suspect it could be something with kops, but it shouldn't matter. I have ensured that the worker nodes and the master nodes have nfs-common installed on them.
For those getting
If you're using RBAC (which it seems kops enables by default), make sure you've created a serviceAccount. Look at the deployment.yaml rather than the manifest.yaml. You'll also need rbac.yaml. The readme describes this but doesn't mention the service account.
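For reference, the service account mentioned above is a plain object that the deployment then references by name. A minimal sketch; the names are illustrative, so check the repo's deployment.yaml and rbac.yaml for the canonical ones:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-provisioner   # must match serviceAccount in the deployment's pod spec
  namespace: default
```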
Sigh. This took me all day to figure out; I hope this helps someone save some time. The issue, for me, was the default service account: it didn't have access to get endpoints. Mind you, this is a fresh EKS cluster install, not running anything special yet. You can confirm this by exec'ing into your efs-provisioner pod with a shell (you will have to install bash since it's Alpine: run `apk add --no-cache bash`, then shell in with /bin/bash). Once you're in, run `/efs-provisioner`. You may start seeing this output:
If you do, you have the same issue as I did. The quick fix is to give the cluster-admin role to the default service account. Of course, depending on your environment and security requirements, you may need a more elaborate fix. If you elect to go the easy way, you can simply apply this:
After I did this, the output from the provisioner immediately became successful and it started creating volumes.
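The manifest itself isn't quoted above, but a binding of that shape would look something like the sketch below; the binding name is illustrative. Note that granting cluster-admin to the default service account is far more permission than the provisioner needs and is best limited to debugging:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: default-sa-admin        # illustrative name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin           # built-in, all-permissions role
subjects:
- kind: ServiceAccount
  name: default                 # the default service account the pod runs as
  namespace: default
```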
There is also RBAC configuration in the repo for the EFS provisioner: https://github.com/kubernetes-incubator/external-storage/blob/master/aws/efs/deploy/rbac.yaml
@ParaSwarm - Thank you so much for posting the detailed guide. Note that for me, I ran ... I am running the provisioner in a namespace (
I am using the RBAC config from the ... This is in an EKS cluster, running 1.11.5.
So it turns out the issue for me was related to #953: there are some missing rules in the RBAC. Adding the following lines to the rules in the

```yaml
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
```

(see https://github.com/helm/charts/pull/9127/files#diff-1f3ae64e932358240df168628073a894R25)

I also added a

```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: efs-provisioner
  namespace: default
```

And attached that

```yaml
...
spec:
  serviceAccount: efs-provisioner
  containers:
  - name: efs-provisioner
    image: quay.io/external_storage/efs-provisioner:latest
...
```

Finally, I had to put the
@geerlingguy This worked for me. Thank you a ton!
@geerlingguy also worked for me, thanks for the great explanation!
I think it's an RBAC configuration issue. If you specify namespace parameters in your RBAC yaml file, you can refer to this file: https://github.com/kubernetes-incubator/external-storage/blob/master/nfs/deploy/kubernetes/rbac.yaml. Only specify the namespace parameter in the ClusterRoleBinding/RoleBinding subjects section. Also don't forget to add the endpoints permission to your ClusterRole resource.
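To illustrate the point about namespace placement: a minimal sketch, assuming the provisioner's service account lives in `default` (the resource names here are illustrative). The ClusterRole is cluster-scoped and carries no namespace; only the subjects entry of the binding does:

```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner     # illustrative name; no namespace here
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner        # illustrative name
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner     # illustrative service account name
  namespace: default               # the namespace goes here, on the subject
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
```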
This worked for me: https://github.com/kubernetes/kubernetes/issues/65937
Rotten issues close after 30d of inactivity.
@fejta-bot: Closing this issue.
@liozzazhang thanks, that fixed my problem!
Hi,
Currently I'm trying to get the nfs-client-provisioner running on k8s v1.10.1, using your given yaml for testing with RBAC. As I deploy test-claim.yaml, the PVC is always Pending with the info below:

```
Normal  ExternalProvisioning  2m (x143 over 37m)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "fuserim.pri/ifs" or manually created by system administrator
```

I checked the nfs-client-provisioner pod, and it starts successfully:

```
root@k8s-master1:/data/nfs_file# kubectl logs nfs-client-provisioner-5bff76dd6c-j2mpg
I0507 07:22:12.743160       1 controller.go:407] Starting provisioner controller 5c858a43-51c7-11e8-84df-0a580af40124!
```

I also referred to #174; the link https://kubernetes.io/docs/admin/kube-controller-manager/ is expired. I checked kube-controller-manager.yaml, and --controllers looks like below:
I don't know whether this is the issue. Anyone can help? Thanks in advance.
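One thing worth double-checking in this situation: the `provisioner` field of the StorageClass must match the name the controller registered with ("fuserim.pri/ifs" in the log above) exactly, and the claim must reference that class. A sketch, with an illustrative class name:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage     # illustrative name
provisioner: fuserim.pri/ifs    # must match the name the provisioner registered
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```

If the names differ, the persistentvolume-controller waits forever with exactly the "waiting for a volume to be created" event shown above.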