
AttachVolume.Attach failed for volume "pvc-3d1d6dfb-39c3-11e9-9185-fa163e02d826" : node has no NodeID annotation #126

Closed
darcyllingyan opened this issue Feb 26, 2019 · 13 comments


@darcyllingyan

Hi,
I have deployed the CSI plugin in my Kubernetes cluster, and the SC, PVC, and PV have been created successfully. But when I create a pod to attach the volume, it always reports the error "AttachVolume.Attach failed for volume "pvc-3d1d6dfb-39c3-11e9-9185-fa163e02d826" : node "sdc-bcmt-01-edge-worker-01" has no NodeID annotation".

# kubectl get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
csi-pvc-cinderplugin   Bound    pvc-3d1d6dfb-39c3-11e9-9185-fa163e02d826   1Gi        RWO            csi-sc-cinderplugin   107m

# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                          STORAGECLASS          REASON   AGE
pvc-00f6dddd-3994-11e9-91ba-fa163e02d826   1Gi        RWO            Delete           Terminating   default/csi-pvc-cinderplugin   csi-sc-cinderplugin            7h1m
pvc-3d1d6dfb-39c3-11e9-9185-fa163e02d826   1Gi        RWO            Delete           Bound         default/csi-pvc-cinderplugin   csi-sc-cinderplugin            107m

# kubectl get pod nginx
Events:
  Type     Reason              Age                    From                                 Message
  ----     ------              ----                   ----                                 -------
  Warning  FailedAttachVolume  34m (x34 over 101m)    attachdetach-controller              AttachVolume.Attach failed for volume "pvc-3d1d6dfb-39c3-11e9-9185-fa163e02d826" : node "sdc-bcmt-01-edge-worker-01" has no NodeID annotation
  Warning  FailedMount         3m11s (x44 over 100m)  kubelet, sdc-bcmt-01-edge-worker-01  Unable to mount volumes for pod "nginx_default(f39547a0-39c3-11e9-9185-fa163e02d826)": timeout expired waiting for volumes to attach or mount for pod "default"/"nginx". list of unmounted volumes=[csi-data-cinderplugin]. list of unattached volumes=[csi-data-cinderplugin default-token-n2jlc]
  Warning  FailedAttachVolume  66s (x23 over 31m)     attachdetach-controller              AttachVolume.Attach failed for volume "pvc-3d1d6dfb-39c3-11e9-9185-fa163e02d826" : node "sdc-bcmt-01-edge-worker-01" has no NodeID annotation
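
As a side note, the annotation the attacher complains about can be inspected directly on the node object. A minimal check (a sketch; it assumes the csi.volume.kubernetes.io/nodeid annotation key that the attach controller reads on this Kubernetes version):

# kubectl get node sdc-bcmt-01-edge-worker-01 -o yaml | grep nodeid

If a CSI driver had registered on the node, this should show a csi.volume.kubernetes.io/nodeid annotation mapping the driver name to its node ID; here it is expected to return nothing, matching the error above.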

Below is the detailed log for attacher container:
attacher00.log

Thanks

@davidz627
Contributor

Hi Darcy, what version of Kubernetes are you running on?

@darcyllingyan
Author

Hi David,
My Kubernetes version is v1.12.3.

# kubelet --version
Kubernetes v1.12.3

@davidz627
Contributor

davidz627 commented Feb 27, 2019

@darcyllingyan This looks like a known issue that we fixed in Kubernetes 1.13. We will look into cherry-picking the change into 1.12 which would put it in the next patch release.

For now, the workaround would be to remove your driver from the cluster and then deploy it again. That should solve the issue in most cases.
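
With the Cinder manifests used later in this thread, the workaround would look roughly like this (a sketch only; the manifest path is taken from the commands posted further down, and the redeploy is expected to help only because the node plugin re-registers and the NodeID annotation gets re-created):

# kubectl delete -f manifests/cinder-csi-plugin/
# kubectl apply -f manifests/cinder-csi-plugin/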

@darcyllingyan
Author

Hi David,
Thanks for the response!

For now, the workaround would be to remove your driver from the cluster and then deploy it again. That should solve the issue in most cases.

You mean I remove the sidecar containers (external-attacher, external-provisioner, node-driver-registrar) and the driver, then deploy them again? This method doesn't seem to take effect, as I have already removed and redeployed several times.

Thanks
Darcy

@darcyllingyan
Author

Hi David,
Besides, I have created the csinodeinfo CRD and enabled the CSINodeInfo feature gate, but I still can't get the csinodeinfo successfully.

# kubectl get crd
NAME                                             CREATED AT
csidrivers.csi.storage.k8s.io                    2019-02-19T09:16:55Z
csinodeinfos.csi.storage.k8s.io                  2019-02-26T15:39:29Z

# kubectl get csinodeinfo
No resources found.

···

@davidz627
Contributor

@darcyllingyan what was the sequence of steps you followed? I believe in 1.12 you must:

  1. Install the CSINodeInfo CRD
  2. Deploy your driver, making sure that the driver goes through the plugin registration mechanism (the UDS must be dropped in the right place: /var/lib/kubelet/plugins_registry/)
  3. kubectl get csinodeinfo should show your driver.

Let me know if that is not the case; there may be another bug we need to look into...
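
A quick way to sanity-check steps 2 and 3 (a sketch; the first command runs on the worker node, and the socket file name depends on the driver):

# ls /var/lib/kubelet/plugins_registry/
# kubectl get csinodeinfo -o yaml

If the registration socket is missing from that directory, kubelet never learns about the driver, so neither the CSINodeInfo object nor the NodeID annotation should be expected to appear.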

@verult
Contributor

verult commented Feb 27, 2019

@darcyllingyan to clarify, in your top post you did not have the CRD installed and did not enable the CSINodeInfo feature gate, right?

@darcyllingyan
Author

@davidz627, @verult,
Yes, I followed your sequence and kubectl get csinodeinfo still reports nothing.
Just now I deleted all the related resources (CRD, driver, PVC) and re-created them following the sequence again.
Below is my procedure:

  1. Add the CSIDriverRegistry=true feature gate for kube-apiserver and kubelet.
  2. The initial state after I deleted all the resources. I'm not sure why the csidrivers CRD still exists even though I deleted all the drivers (as below, kubectl get pod -n kube-system | grep csi shows nothing).
# kubectl get crd|grep csi
csidrivers.csi.storage.k8s.io                    2019-02-19T09:16:55Z
# kubectl get pod -n kube-system|grep csi
#
  3. Create the CRD
# kubectl apply -f crd.yaml 
customresourcedefinition.apiextensions.k8s.io/csinodeinfos.csi.storage.k8s.io created
# kubectl get crd|grep csi
csidrivers.csi.storage.k8s.io                    2019-02-19T09:16:55Z
csinodeinfos.csi.storage.k8s.io                  2019-02-28T00:55:47Z
  4. Install the driver
# kubectl apply -f manifests/rbac/
serviceaccount/csi-attacher created
clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role-psp created
secret/csi-ca-cinderplugin created
serviceaccount/csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/csi-nodeplugin-psp created
serviceaccount/csi-provisioner created
clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role-psp created
secret/cloud-config created
serviceaccount/csi-snapshotter created
clusterrole.rbac.authorization.k8s.io/external-snapshotter-runner created
clusterrolebinding.rbac.authorization.k8s.io/csi-snapshotter-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-snapshotter-role-psp created
# kubectl apply -f manifests/cinder-csi-plugin/
service/csi-attacher-cinderplugin created
statefulset.apps/csi-attacher-cinderplugin created
daemonset.apps/csi-nodeplugin-cinderplugin created
service/csi-provisioner-cinderplugin created
statefulset.apps/csi-provisioner-cinderplugin created
service/csi-snapshotter-cinder created
statefulset.apps/csi-snapshotter-cinder created

# kubectl get pod -n kube-system|grep csi
csi-attacher-cinderplugin-0                                  2/2     Running   0          31s
csi-nodeplugin-cinderplugin-5bjp5                            2/2     Running   0          31s
csi-nodeplugin-cinderplugin-fhhdv                            2/2     Running   0          30s
csi-nodeplugin-cinderplugin-fpwwg                            2/2     Running   0          30s
csi-nodeplugin-cinderplugin-z5z7f                            2/2     Running   0          30s
csi-provisioner-cinderplugin-0                               2/2     Running   0          30s
csi-snapshotter-cinder-0                                     2/2     Running   0          30s
  5. The csinodeinfo shows nothing:
# kubectl get csinodeinfos
No resources found.

Note: in the guide, the minimum Kubernetes requirement for node-driver-registrar is v1.13 (https://github.com/kubernetes-csi/node-driver-registrar#compatibility), so I'm not sure whether this workaround can take effect with node-driver-registrar.
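
One more thing I can check (a sketch; the container name node-driver-registrar is my assumption about this manifest) is the registrar sidecar log on one of the node plugin pods, since that sidecar is what registers the driver with kubelet:

# kubectl logs -n kube-system csi-nodeplugin-cinderplugin-5bjp5 -c node-driver-registrar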

Thanks
Darcy

@verult
Contributor

verult commented Mar 1, 2019

The CSINodeInfo feature gate needs to be set to true on both the apiserver and the kubelet as well.

@verult
Contributor

verult commented Mar 1, 2019

If there's a gap in our documentation about enabling this feature gate, please let us know. Thanks!

@darcyllingyan
Author

@verult,
Thanks for the response.
Sorry for the misleading description of my step 1:

  1. Add the CSIDriverRegistry=true feature gate for kube-apiserver and kubelet.

In fact, I have enabled the CSINodeInfo feature gate (not only CSIDriverRegistry) on both the apiserver and the kubelet, but kubectl get csinodeinfos still shows nothing.
Below are my changes:

  1. For the apiserver feature gate:
# cat /etc/kubernetes/manifests/kube-apiserver.yml
- --feature-gates=PodPriority=true,ExperimentalCriticalPodAnnotation=true,SCTPSupport=true,CSINodeInfo=true,CSIDriverRegistry=true
  2. For the kubelet:
# cat /etc/kubernetes/kubelet-config.yml
featureGates:
  PodPriority: true
  ExperimentalCriticalPodAnnotation: true
  RotateKubeletClientCertificate: true
  RotateKubeletServerCertificate: true
  SCTPSupport: true
  CSINodeInfo: true
  CSIDriverRegistry: true

Thanks
Darcy

@verult
Contributor

verult commented Mar 7, 2019

Would you mind sharing your DaemonSet spec for the node deployment? I'm wondering whether the node-driver-registrar args are set correctly.

The other thing I can think of is that GetNodeId isn't implemented correctly, but if you are using a published Cinder driver it's probably OK.
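
For comparison, the registrar container in a node plugin DaemonSet typically looks roughly like the following (a sketch only; the image tag, the cinder.csi.openstack.org driver name in the registration path, and the ADDRESS value and volume mounts are assumptions, not taken from this deployment):

  - name: node-driver-registrar
    image: quay.io/k8scsi/csi-node-driver-registrar:v1.0.2
    args:
      - "--csi-address=$(ADDRESS)"
      - "--kubelet-registration-path=/var/lib/kubelet/plugins/cinder.csi.openstack.org/csi.sock"
    env:
      - name: ADDRESS
        value: /csi/csi.sock
    volumeMounts:
      - name: socket-dir
        mountPath: /csi
      - name: registration-dir
        mountPath: /registration

The important parts are that --csi-address points at the driver's socket inside the container and --kubelet-registration-path points at the same socket as kubelet sees it on the host.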

@darcyllingyan
Author

As I no longer have a Kubernetes 1.12 cluster and only have Kubernetes 1.13, I will close the issue now and try again later when I have a Kubernetes 1.12 environment.
