Node has no NodeID annotation #392

Closed
Dremor opened this issue Feb 15, 2019 · 19 comments
Labels: area/kubernetes (Kubernetes related, like K8s version compatibility), wontfix

Comments

Dremor commented Feb 15, 2019

Hi,

I'm trying to use Longhorn in order to get persistent storage, but I'm stuck with the following error:

Normal Scheduled Successfully assigned default/test-6588947745-7q45v to serveur-1 5 minutes ago
Warning FailedAttachVolume AttachVolume.Attach failed for volume "pvc-bb7bbbc9310a11e9" : node "serveur-1" has no NodeID annotation a minute ago
Warning FailedMount Unable to mount volumes for pod "test-6588947745-7q45v_default(bbc292bd-310a-11e9-9742-fa163ec5f140)": timeout expired waiting for volumes to attach or mount for pod "default"/"test-6588947745-7q45v". list of unmounted volumes=[test3]. list of unattached volumes=[test test3 default-token-gdlm5] a minute ago

The workload is a simple Ubuntu container.

Any ideas?

Best regards

yasker (Member) commented Feb 15, 2019

What are the Longhorn and Kubernetes versions?

yasker (Member) commented Feb 15, 2019

I saw there was an issue opened against Kubernetes related to a restarted CSI driver: kubernetes/kubernetes#71424

Dremor (Author) commented Feb 20, 2019

Kubernetes version is v1.13.1-rancher1-1, Longhorn chart v0.3.3, Rancher 2.2.0-alpha6

Dremor (Author) commented Feb 20, 2019

Additional note: I'm using CoreOS 1967.6.0 as the host OS.

yasker (Member) commented Feb 26, 2019

@Dremor CoreOS? We don't officially support CoreOS yet. Are you using the Flexvolume driver instead of CSI on CoreOS?

m-gnaedig commented Apr 10, 2019

Hi Yasker

I have the same issue and I am testing with LongHorn 1.40, RancherOS 1.4.0, Kubernetes v1.13.5-rancher1-2, docker-1.12.6 on Rancher v2.2.1.

AttachVolume.Attach failed for volume "pvc-14053670-5b73-11e9-aff3-080027416b12" : node "node-02" has no NodeID annotation

Another volume on node-01 was attached correctly.

yasker (Member) commented Apr 10, 2019

Hi @ManCon

What's the Longhorn version you're using? We don't have v1.4.0 yet. Did you mean v0.4.0? RancherOS support has been introduced in Longhorn v0.4.1.

yasker (Member) commented Apr 10, 2019

@Dremor We also introduced CoreOS support in v0.4.1

@Dremor @ManCon You can check https://github.com/rancher/longhorn/blob/master/docs/csi-config.md for details.
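For anyone debugging this: the error comes from the external-attacher failing to find the csi.volume.kubernetes.io/nodeid annotation on the Node object, which is set once the CSI driver registers on that node. A quick check (a sketch; the node name is taken from the report above, and the namespace assumes the default longhorn-system install):

kubectl get node serveur-1 -o yaml | grep nodeid
kubectl -n longhorn-system get pods -o wide | grep csi

If the annotation is missing and the longhorn-csi-plugin pod on that node isn't running, the driver never registered there.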

m-gnaedig commented Apr 12, 2019

Hi
I have now done some more tests with this setup:

RancherOS 1.4.0
Rancher Server Docker 18.03
Rancher v2.2.1
Kubernetes v1.13.5-rancher1-2
Node Docker-17.03.2-ce

For me it is working with these versions.
Thanks for your support.

tiger0425 commented:

Docker version: 18.9.6
Kubelet version: v1.12.0
OS: CentOS Linux 7
Kernel version: 3.10.0-957.10.1.el7.x86_64

AttachVolume.Attach failed for volume "pvc-341bbf51-75ed-11e9-bfc0-005056b4b2af" : node "crawler203" has no NodeID annotation

Dashboard/volume/pvc-341bbf51-75ed-11e9-bfc0-005056b4b2af
Volume Details
State:Detached
Health:
Frontend:Block Device
Attached Node & Endpoint :
Size:2 Gi
Actual Size:0 Bi
Base Image:
Engine Image:rancher/longhorn-engine:v0.4.1
Created:10 minutes ago
Latest Backup:

yasker added the area/kubernetes label on May 14, 2019
yasker (Member) commented May 14, 2019

@tiger0425 According to Kubernetes, it's a bug in v1.12 and should be fixed in v1.13.

As a workaround, can you try scaling the longhorn-driver-deployer deployment to 0, then scaling it back to 1? This should make Kubernetes retry the driver installation process and may work around the bug.
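A sketch of that workaround, assuming the default longhorn-system namespace:

kubectl -n longhorn-system scale deployment longhorn-driver-deployer --replicas=0
kubectl -n longhorn-system scale deployment longhorn-driver-deployer --replicas=1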

tiger0425 commented:

Thank you.
I have installed RancherOS 1.5.1 and it is OK now.

johansmitsnl commented Oct 24, 2019

Is there a fix for k3s v0.10.0?

My pod is not working:

Events:
  Type     Reason              Age        From                     Message
  ----     ------              ----       ----                     -------
  Normal   Scheduled           <unknown>  default-scheduler        Successfully assigned infrastructure/gitlab-75d9dc8966-vlmnf to s00
  Warning  FailedAttachVolume  8s         attachdetach-controller  AttachVolume.Attach failed for volume "pvc-69aabd34-6e04-4326-a45c-cb91a97c1415" : node "s00" has no NodeID annotation
I1024 21:30:09.746004       1 controller.go:167] Started VA processing "csi-1bf211fb1221eb3b558933873b50b4391e58a7fd4c8e96bad50e3d2ccbf9b1c3"
I1024 21:30:09.746078       1 csi_handler.go:85] CSIHandler: processing VA "csi-1bf211fb1221eb3b558933873b50b4391e58a7fd4c8e96bad50e3d2ccbf9b1c3"
I1024 21:30:09.746121       1 csi_handler.go:112] Attaching "csi-1bf211fb1221eb3b558933873b50b4391e58a7fd4c8e96bad50e3d2ccbf9b1c3"
I1024 21:30:09.746158       1 csi_handler.go:217] Starting attach operation for "csi-1bf211fb1221eb3b558933873b50b4391e58a7fd4c8e96bad50e3d2ccbf9b1c3"
I1024 21:30:09.746386       1 csi_handler.go:188] PV finalizer is already set on "pvc-69aabd34-6e04-4326-a45c-cb91a97c1415"
I1024 21:30:09.746453       1 csi_handler.go:456] Can't get CSINodeInfo s00: csinodeinfo.csi.storage.k8s.io "s00" not found
I1024 21:30:09.746499       1 csi_handler.go:323] Saving attach error to "csi-1bf211fb1221eb3b558933873b50b4391e58a7fd4c8e96bad50e3d2ccbf9b1c3"
I1024 21:30:09.758873       1 csi_handler.go:333] Saved attach error to "csi-1bf211fb1221eb3b558933873b50b4391e58a7fd4c8e96bad50e3d2ccbf9b1c3"
I1024 21:30:09.758907       1 csi_handler.go:95] Error processing "csi-1bf211fb1221eb3b558933873b50b4391e58a7fd4c8e96bad50e3d2ccbf9b1c3": failed to attach: node "s00" has no NodeID annotation
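The "Can't get CSINodeInfo" message together with the missing annotation usually means the CSI driver never registered with the kubelet on that node. A sketch for checking the registration side (the pod name is a placeholder; pick the longhorn-csi-plugin pod that runs on the affected node):

kubectl -n longhorn-system get pods -o wide | grep longhorn-csi-plugin
kubectl -n longhorn-system logs <longhorn-csi-plugin-pod-on-s00> --all-containers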

planetf1 commented:

I am currently seeing the same error with:

  • k3s 0.10.2
  • longhorn 0.6.2

In my case:

➜  ~ kubectl get pods --namespace longhorn-system
NAME                                        READY   STATUS    RESTARTS   AGE
longhorn-manager-p5l4c                      1/1     Running   0          3m8s
longhorn-manager-xlwfb                      1/1     Running   0          3m8s
engine-image-ei-3827e67c-mvkp9              1/1     Running   0          2m51s
longhorn-ui-659687f745-6s2f2                1/1     Running   0          3m7s
longhorn-manager-d9rp7                      1/1     Running   0          3m8s
engine-image-ei-3827e67c-rqd5n              1/1     Running   0          2m51s
engine-image-ei-3827e67c-5md7v              1/1     Running   0          2m51s
instance-manager-r-5bdcbe3e                 1/1     Running   0          114s
instance-manager-e-97929415                 1/1     Running   0          114s
instance-manager-e-c08fe7e2                 1/1     Running   0          114s
instance-manager-r-6cc3180c                 1/1     Running   0          114s
instance-manager-e-1dafc01d                 1/1     Running   0          113s
instance-manager-r-a6f2a6a0                 1/1     Running   0          113s
longhorn-driver-deployer-5df4d889b4-hzv7d   1/1     Running   0          3m7s
longhorn-csi-plugin-skftd                   2/2     Running   0          98s
csi-provisioner-5574bbb845-9xv2b            1/1     Running   0          98s
longhorn-csi-plugin-2nzsx                   2/2     Running   0          98s
csi-provisioner-5574bbb845-2nltj            1/1     Running   0          98s
csi-attacher-76cbf4d75b-t6fs5               1/1     Running   0          98s
longhorn-csi-plugin-4pdff                   2/2     Running   0          98s
csi-provisioner-5574bbb845-qw2fn            1/1     Running   0          98s
csi-attacher-76cbf4d75b-nbjbf               1/1     Running   0          98s
csi-attacher-76cbf4d75b-qkwtr               1/1     Running   0          98s

I have it set up as the default storage class:

➜  ~ kubectl get storageclass
NAME                 PROVISIONER             AGE
longhorn (default)   rancher.io/longhorn     3m33s
local-path           rancher.io/local-path   10m

And my versions:

➜  ~ helm version
version.BuildInfo{Version:"v3.0.0", GitCommit:"e29ce2a54e96cd02ccfce88bee4f58bb6e2a28b6", GitTreeState:"clean", GoVersion:"go1.13.4"}
➜  ~ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-14T04:24:29Z", GoVersion:"go1.12.13", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.2-k3s.1", GitCommit:"b8b17ba55f20e590df507fce333dfee13ab438c6", GitTreeState:"clean", BuildDate:"2019-10-16T05:17Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}

So I then deploy wordpress:

➜  ~ helm install wordpress bitnami/wordpress --namespace wordpress -f ~/etc/wordpress.yaml
NAME: wordpress
LAST DEPLOYED: Tue Nov 19 14:29:40 2019
NAMESPACE: wordpress
STATUS: deployed

where I've also explicitly set the storage class:

➜  ~ cat ~/etc/wordpress.yaml | grep storage
global.storageClass: longhorn
persistence.storageClass: longhorn
mariadb.master.persistence.storageClass: longhorn
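(If those dotted keys are meant to be read as nested chart values, the equivalent nested YAML would look roughly like the sketch below, assuming the standard bitnami/wordpress value layout:)

global:
  storageClass: longhorn
persistence:
  storageClass: longhorn
mariadb:
  master:
    persistence:
      storageClass: longhorn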

The result is that my PVCs look OK, along with the PVs:

➜  ~ kubectl get pvc --namespace wordpress
NAME                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
wordpress                  Bound    pvc-a5c07956-e721-4dc5-8ce6-0f0f98f21029   10Gi       RWO            longhorn       30s
data-wordpress-mariadb-0   Bound    pvc-e72ecdcd-0d67-4650-8ec9-31001600b81d   8Gi        RWO            longhorn       30s
➜  ~ kubectl get pv --namespace wordpress
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   REASON   AGE
pvc-a5c07956-e721-4dc5-8ce6-0f0f98f21029   10Gi       RWO            Delete           Bound    wordpress/wordpress                  longhorn                38s
pvc-e72ecdcd-0d67-4650-8ec9-31001600b81d   8Gi        RWO            Delete           Bound    wordpress/data-wordpress-mariadb-0   longhorn                38s

That's super, except my pods aren't initialising properly:

➜  ~ kubectl get pods --namespace wordpress
NAME                       READY   STATUS              RESTARTS   AGE
svclb-wordpress-6ggmp      0/2     Pending             0          50s
svclb-wordpress-svz2s      0/2     Pending             0          50s
svclb-wordpress-wvjdj      0/2     Pending             0          50s
wordpress-66ff7978-9kqkl   0/1     ContainerCreating   0          50s
wordpress-mariadb-0        0/1     ContainerCreating   0          50s

And looking at one of them, it looks like it's the 'no NodeID annotation' issue:

➜  ~ kubectl describe pod/wordpress-mariadb-0 --namespace wordpress
Name:           wordpress-mariadb-0
Namespace:      wordpress
Priority:       0
Node:           kube-node-67ca/172.31.0.57
Start Time:     Tue, 19 Nov 2019 14:29:47 +0000
Labels:         app=mariadb
                chart=mariadb-7.0.1
                component=master
                controller-revision-hash=wordpress-mariadb-56457bcf8f
                release=wordpress
                statefulset.kubernetes.io/pod-name=wordpress-mariadb-0
Annotations:    <none>
Status:         Pending
..... [truncated]
Events:
  Type     Reason              Age                From                     Message
  ----     ------              ----               ----                     -------
  Warning  FailedScheduling    <unknown>          default-scheduler        pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Warning  FailedScheduling    <unknown>          default-scheduler        pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled           <unknown>          default-scheduler        Successfully assigned wordpress/wordpress-mariadb-0 to kube-node-67ca
  Warning  FailedAttachVolume  26s (x7 over 58s)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-e72ecdcd-0d67-4650-8ec9-31001600b81d" : node "kube-node-67ca" has no NodeID annotation

cc: @andyjeffries

Originally opened against the cloud provider, Civo, at civo/kube100#11.

planetf1 commented:

And here's the info I see in the attacher:

I1119 14:37:06.656225       1 controller.go:167] Started VA processing "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.656269       1 csi_handler.go:85] CSIHandler: processing VA "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.656279       1 csi_handler.go:112] Attaching "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.656289       1 csi_handler.go:217] Starting attach operation for "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.656368       1 csi_handler.go:188] PV finalizer is already set on "pvc-e72ecdcd-0d67-4650-8ec9-31001600b81d"
I1119 14:37:06.656391       1 csi_handler.go:456] Can't get CSINodeInfo kube-node-67ca: csinodeinfo.csi.storage.k8s.io "kube-node-67ca" not found
I1119 14:37:06.656405       1 csi_handler.go:323] Saving attach error to "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.663906       1 csi_handler.go:333] Saved attach error to "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.664127       1 csi_handler.go:95] Error processing "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b": failed to attach: node "kube-node-67ca" has no NodeID annotation
I1119 14:37:06.664366       1 controller.go:167] Started VA processing "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.664510       1 csi_handler.go:85] CSIHandler: processing VA "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.664589       1 csi_handler.go:112] Attaching "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.664705       1 csi_handler.go:217] Starting attach operation for "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.664910       1 csi_handler.go:188] PV finalizer is already set on "pvc-e72ecdcd-0d67-4650-8ec9-31001600b81d"
I1119 14:37:06.665047       1 csi_handler.go:456] Can't get CSINodeInfo kube-node-67ca: csinodeinfo.csi.storage.k8s.io "kube-node-67ca" not found
I1119 14:37:06.665152       1 csi_handler.go:323] Saving attach error to "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.668368       1 csi_handler.go:333] Saved attach error to "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:06.668671       1 csi_handler.go:95] Error processing "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b": failed to attach: node "kube-node-67ca" has no NodeID annotation
I1119 14:37:09.704041       1 reflector.go:215] k8s.io/client-go/informers/factory.go:131: forcing resync
I1119 14:37:09.704152       1 controller.go:167] Started VA processing "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.704170       1 csi_handler.go:85] CSIHandler: processing VA "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.704187       1 csi_handler.go:112] Attaching "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.704199       1 csi_handler.go:217] Starting attach operation for "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.704330       1 csi_handler.go:188] PV finalizer is already set on "pvc-a5c07956-e721-4dc5-8ce6-0f0f98f21029"
I1119 14:37:09.704350       1 csi_handler.go:456] Can't get CSINodeInfo kube-master-68d5: csinodeinfo.csi.storage.k8s.io "kube-master-68d5" not found
I1119 14:37:09.704365       1 csi_handler.go:323] Saving attach error to "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.704692       1 controller.go:167] Started VA processing "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.704857       1 csi_handler.go:85] CSIHandler: processing VA "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.705055       1 csi_handler.go:112] Attaching "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.705255       1 csi_handler.go:217] Starting attach operation for "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.705804       1 csi_handler.go:188] PV finalizer is already set on "pvc-e72ecdcd-0d67-4650-8ec9-31001600b81d"
I1119 14:37:09.706039       1 csi_handler.go:456] Can't get CSINodeInfo kube-node-67ca: csinodeinfo.csi.storage.k8s.io "kube-node-67ca" not found
I1119 14:37:09.706248       1 csi_handler.go:323] Saving attach error to "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.710682       1 csi_handler.go:333] Saved attach error to "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.711052       1 csi_handler.go:95] Error processing "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475": failed to attach: node "kube-master-68d5" has no NodeID annotation
I1119 14:37:09.711058       1 csi_handler.go:333] Saved attach error to "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.711833       1 csi_handler.go:95] Error processing "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b": failed to attach: node "kube-node-67ca" has no NodeID annotation
I1119 14:37:09.711759       1 controller.go:167] Started VA processing "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.712836       1 csi_handler.go:85] CSIHandler: processing VA "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.712936       1 csi_handler.go:112] Attaching "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.713023       1 csi_handler.go:217] Starting attach operation for "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.713136       1 csi_handler.go:188] PV finalizer is already set on "pvc-a5c07956-e721-4dc5-8ce6-0f0f98f21029"
I1119 14:37:09.713221       1 csi_handler.go:456] Can't get CSINodeInfo kube-master-68d5: csinodeinfo.csi.storage.k8s.io "kube-master-68d5" not found
I1119 14:37:09.713299       1 csi_handler.go:323] Saving attach error to "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.712350       1 reflector.go:215] k8s.io/client-go/informers/factory.go:131: forcing resync
I1119 14:37:09.713770       1 controller.go:197] Started PV processing "pvc-e72ecdcd-0d67-4650-8ec9-31001600b81d"
I1119 14:37:09.713864       1 csi_handler.go:353] CSIHandler: processing PV "pvc-e72ecdcd-0d67-4650-8ec9-31001600b81d"
I1119 14:37:09.713973       1 csi_handler.go:357] CSIHandler: processing PV "pvc-e72ecdcd-0d67-4650-8ec9-31001600b81d": no deletion timestamp, ignoring
I1119 14:37:09.714090       1 controller.go:197] Started PV processing "pvc-a5c07956-e721-4dc5-8ce6-0f0f98f21029"
I1119 14:37:09.714189       1 csi_handler.go:353] CSIHandler: processing PV "pvc-a5c07956-e721-4dc5-8ce6-0f0f98f21029"
I1119 14:37:09.714287       1 csi_handler.go:357] CSIHandler: processing PV "pvc-a5c07956-e721-4dc5-8ce6-0f0f98f21029": no deletion timestamp, ignoring
I1119 14:37:09.714413       1 controller.go:167] Started VA processing "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.714550       1 csi_handler.go:85] CSIHandler: processing VA "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.714641       1 csi_handler.go:112] Attaching "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.714751       1 csi_handler.go:217] Starting attach operation for "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.714940       1 csi_handler.go:188] PV finalizer is already set on "pvc-e72ecdcd-0d67-4650-8ec9-31001600b81d"
I1119 14:37:09.715041       1 csi_handler.go:456] Can't get CSINodeInfo kube-node-67ca: csinodeinfo.csi.storage.k8s.io "kube-node-67ca" not found
I1119 14:37:09.715160       1 csi_handler.go:323] Saving attach error to "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.716094       1 csi_handler.go:333] Saved attach error to "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475"
I1119 14:37:09.716131       1 csi_handler.go:95] Error processing "csi-e713eef5c3b125cd175c25f4cca996bfad64f64e65233c7119d721a15db5d475": failed to attach: node "kube-master-68d5" has no NodeID annotation
I1119 14:37:09.718077       1 csi_handler.go:333] Saved attach error to "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b"
I1119 14:37:09.718105       1 csi_handler.go:95] Error processing "csi-cecf2951fe8e8f2eeb6e72670276aa591f860bd95d13c858ddf51c0965e1c28b": failed to attach: node "kube-node-67ca" has no NodeID annotation

planetf1 commented:

I repeated the test using k3s 1.0.0 and had the same issue.

shuo-wu (Contributor) commented Nov 21, 2019

I repeated the test using k3s 1.0.0 and had the same issue.

I think the issue you reported is the same as #835, and this doc may help.
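For the k3s case, the usual fix at the time was to point Longhorn's driver deployer at k3s's non-default kubelet root directory. A minimal sketch of the relevant arguments in the longhorn-driver-deployer Deployment (assuming the --kubelet-root-dir argument supported by the deployer in these releases; the path shown is the common k3s location for these versions, so verify it on your nodes):

        - --kubelet-root-dir
        - /var/lib/rancher/k3s/agent/kubelet

Then scale the deployer down and back up (or re-apply the manifest) so the CSI components are redeployed with the new path.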

planetf1 commented:

Thanks, I will work with the provider to try that.

stale bot commented Jan 21, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
