
VolumeAttachment not marked as detached causes problems when the Node is deleted. #215

Open
msau42 opened this issue Mar 12, 2020 · 31 comments
Labels: kind/bug, lifecycle/frozen

Comments

@msau42 (Collaborator) commented Mar 12, 2020

In #184, we had decided that instead of marking the VolumeAttachment as detached, we would just requeue the volume to have the workqueue process it again.

However, this doesn't work in the case where the Node is deleted. In that scenario:

  1. ListVolumes() shows that volume is not attached to the node anymore
  2. ReconcileVA() sets force sync
  3. syncAttach() just tries to reattach the volume again and fails because the node is gone
  4. In the k/k AD controller, we try to attach to the new node, but it fails the multi-attach check because the volume is still attached in asw.

What should happen is:

  1. ListVolumes() shows that volume is not attached to the node anymore
  2. We actually mark VolumeAttachment.status.attached as detached
  3. In k/k AD controller, VerifyVolumesAttached() sees that VolumeAttachment is detached, updates asw
  4. AD reconciler allows new Attach on new node to proceed.

I'm not sure of the best way to fix step 2. Some suggestions I have, in order of preference (a rough sketch of option 1 follows this list):

  1. We go back to actually updating VolumeAttachment in ReconcileVA() like the original PR did. But we call markAsDetached to make sure we update everything properly.
  2. We pass some more state to syncVA() so that it can markAsDetached if csiAttach failed on the force sync.
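To make option 1 a bit more concrete, here is a minimal sketch (assuming client-go and the storage/v1 API; reconcileVA, the attachedOnNode flag, and this simplified markAsDetached are illustrative stand-ins, not the actual external-attacher code):

```go
package sketch

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// markAsDetached clears status.attached so the k/k AD controller can observe
// the detach. (Simplified: the real helper would also clear attachment
// metadata and error fields.)
func markAsDetached(ctx context.Context, client kubernetes.Interface, va *storagev1.VolumeAttachment) error {
	clone := va.DeepCopy()
	clone.Status.Attached = false
	_, err := client.StorageV1().VolumeAttachments().UpdateStatus(ctx, clone, metav1.UpdateOptions{})
	return err
}

// reconcileVA illustrates option 1: when the periodic ListVolumes() sync shows
// the volume is no longer attached to its node, update the VolumeAttachment
// instead of only re-queuing it.
func reconcileVA(ctx context.Context, client kubernetes.Interface, va *storagev1.VolumeAttachment, attachedOnNode bool) error {
	if va.Status.Attached && !attachedOnNode {
		return markAsDetached(ctx, client, va)
	}
	return nil
}
```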
@msau42 (Collaborator, Author) commented Mar 12, 2020

cc @jsafrane

@msau42 (Collaborator, Author) commented Mar 12, 2020

/kind bug

@k8s-ci-robot added the kind/bug label Mar 12, 2020
@jsafrane (Contributor) commented:

What should happen is:

  1. ListVolumes() shows that volume is not attached to the node anymore
  2. We actually mark VolumeAttachment.status.attached as detached

The volume might already be attaching. The A/D controller does not know about that.

  3. In k/k AD controller, VerifyVolumesAttached() sees that VolumeAttachment is detached, updates asw

It's not possible to distinguish attaching from attached there.

  4. AD reconciler allows new Attach on new node to proceed.

IMO, A/D controller (or csi plugin) should see the Node does not exist and delete VolumeAttachment.

@msau42 (Collaborator, Author) commented Mar 16, 2020

Do you see a problem if we mark the VolumeAttachment as detached while it's still attaching? How is this different from the normal case when we are first attaching a volume and its VolumeAttachment.status.attached is false?

@jsafrane (Contributor) commented:

From the external-attacher's POV, it should be safe. In the worst case, when the regular sync and the ListVolumes sync race, it marks a just-attached volume as detached. This causes the VolumeAttachment to be synced again, ControllerPublish will be called, and it will fix the VolumeAttachment status.

Still, something in the A/D controller / CSI plugin must check that the destination node is gone, delete the VolumeAttachment, and update the ASW.
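A minimal sketch of that check (illustrative only, not existing A/D controller or external-attacher code; cleanupIfNodeGone is a hypothetical helper):

```go
package sketch

import (
	"context"

	storagev1 "k8s.io/api/storage/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// cleanupIfNodeGone deletes a VolumeAttachment whose destination node no
// longer exists, so that a new attach elsewhere can proceed. The caller would
// also need to remove the volume/node pair from the ASW.
func cleanupIfNodeGone(ctx context.Context, client kubernetes.Interface, va *storagev1.VolumeAttachment) error {
	_, err := client.CoreV1().Nodes().Get(ctx, va.Spec.NodeName, metav1.GetOptions{})
	if err == nil {
		return nil // node still exists, nothing to do
	}
	if !apierrors.IsNotFound(err) {
		return err // transient error, retry later
	}
	// The Node object is gone: remove the stale VolumeAttachment; the
	// external-attacher's finalizer handling then issues the detach.
	return client.StorageV1().VolumeAttachments().Delete(ctx, va.Name, metav1.DeleteOptions{})
}
```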

@msau42 (Collaborator, Author) commented Mar 17, 2020

When a node is deleted, the combination of pods getting deleted and the AD controller's VerifyVolumesAttached causes the detach.

@msau42 (Collaborator, Author) commented Mar 17, 2020

I forgot to mention in the initial comment: the Pod does get deleted, which should trigger detach; however, the AD controller doesn't detach because it still sees the volume mounted in asw and says it's not safe to detach. This also starts the 6-minute force-detach timer.

It depends on VerifyVolumesAttached to override asw.

@jsafrane (Contributor) commented:

It starts making sense now.

I'm wondering whether it can break anything in the A/D controller. If A/D VerifyVolumesAreAttached finds a volume that is detached and the pod does not exist, it will remove the pod from the ASW. Who is going to delete the VolumeAttachment then? CSI plugin.Detach is not called... Should the A/D controller call it?

@msau42 (Collaborator, Author) commented Mar 17, 2020

Looks like VolumesAreAttached calls DeleteVolumeNode, which completely removes it from asw, so you're right: Detach won't get called and the VolumeAttachment will be leaked. One possibility is to not completely remove it from asw, but to mark some special state instead.

Another possibility is we add VolumeAttachment GC to the AD: kubernetes/kubernetes#77324

@msau42 (Collaborator, Author) commented Mar 17, 2020

Another thought: should VolumeAttachment have an ownerRef to the Node object, so that when the Node object gets deleted, we automatically trigger Detach?
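A sketch of what that could look like (this is the idea being proposed, not existing behavior; addNodeOwnerRef is a hypothetical helper). One caveat: owner references carry the Node's UID, so a node re-created with the same name would not adopt the old reference.

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// addNodeOwnerRef gives a VolumeAttachment an owner reference to its Node, so
// that deleting the Node object lets the garbage collector delete the
// VolumeAttachment, which in turn triggers the attacher's detach path.
func addNodeOwnerRef(va *storagev1.VolumeAttachment, node *corev1.Node) {
	va.OwnerReferences = append(va.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "Node",
		Name:       node.Name,
		UID:        node.UID,
	})
}
```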

@jsafrane (Contributor) commented:

I'd prefer to have a proper fix instead of piling up hacks in A/D controller & external-attacher.

The Node object is deleted. PodGCController deletes pods from the node. Should it delete VolumeAttachments too? That looks too CSI-specific... Or should we add a loop to the A/D controller to force-detach from really deleted nodes sooner than in 6 minutes? This would work for all volume plugins and we wouldn't need ListVolumes / VerifyVolumesAreAttached at all.

How probable is it that the deleted node comes back, and how quickly? If someone accidentally deletes a Node object (keeping kubelet running), how quickly is it re-created? PodGCController has a 40s "quarantine".

@msau42 (Collaborator, Author) commented Mar 19, 2020

VerifyVolumesAreAttached is still needed for other scenarios where the node still exists but the volume got detached out of band, or the node gets recreated with the same name.

The other thing I realized is that this problem only occurs if the Node is ungracefully deleted (without a drain).

If someone deleted the Node object, kubelet only recreates it if it restarts.

I think we should still let Pod GC invoke detach. Right now it is guarded by the existence of the node, which is guarded by this check. I'm not sure what problems may happen if we delete the node from the cache while we think pods are still attached to it. The other problem is that nodeDelete is only called via informer. If it fails that one time, then we never retry removing the node from the cache again (see the sketch below for one way to make this retryable).
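One way to make that removal retryable, sketched with standard client-go building blocks (the wiring here is illustrative, not the actual A/D controller code):

```go
package sketch

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

// wireNodeDeletes enqueues deleted node names on a rate-limited workqueue
// instead of cleaning up caches directly in the informer callback, so a
// failed cleanup is retried with backoff rather than lost forever.
// (Tombstone handling for DeletedFinalStateUnknown is omitted for brevity.)
func wireNodeDeletes(factory informers.SharedInformerFactory, queue workqueue.RateLimitingInterface) {
	factory.Core().V1().Nodes().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		DeleteFunc: func(obj interface{}) {
			if node, ok := obj.(*corev1.Node); ok {
				queue.Add(node.Name) // a worker loop pops names and re-queues on failure
			}
		},
	})
}
```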

@msau42 (Collaborator, Author) commented Mar 19, 2020

I discussed this a bit with @saad-ali, and he has concerns about using the volumeattachment.status.attached field to indicate that a volume is detached, because it could also mean an attach or detach is in progress. If we want to fix this, we may need to think about adding a new field that can actually indicate something is detached with nothing in progress.

Also, we probably should revisit the logic in csi.VolumesAreAttached that uses the status.attached field, and also revisit other cases where we return attached = false. That causes the volume to be removed from asw. Could that be problematic if it's actually still attached?

@jsafrane (Contributor) commented:

concerns about using the volumeattachment.status.attached field to indicate that a volume is detached because it could also mean attach or detach could be in progress

The only signal that a volume is fully detached is that the VolumeAttachment is deleted / missing, and IMO only this should be used by the A/D controller to check that the volume is fully detached (a sketch of this check follows below). When a VolumeAttachment exists, the volume may be attaching / detaching / fully attached.

We could add a "detached and can't attach because the node is gone" state to VolumeAttachment, but that is safe and race-free only until the node appears again.
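A minimal sketch of that rule (assuming client-go; isFullyDetached and how the VolumeAttachment name is derived are illustrative):

```go
package sketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isFullyDetached treats a volume as fully detached only when its
// VolumeAttachment object no longer exists. While the object exists, the
// volume may be attaching, attached, or detaching, regardless of
// status.attached.
func isFullyDetached(ctx context.Context, client kubernetes.Interface, vaName string) (bool, error) {
	_, err := client.StorageV1().VolumeAttachments().Get(ctx, vaName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		return true, nil
	}
	return false, err
}
```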

@fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jun 17, 2020
@msau42 (Collaborator, Author) commented Jun 17, 2020

/lifecycle frozen

@alex-berger commented:

Any news on this? This is really annoying, as there are so many situations where nodes are deleted:

  • by the cluster autoscaler
  • during cluster upgrade
  • ...

And then we always have to manually fix dozens of pods which cannot start because they cannot attach their volumes.

@ialidzhikov (Contributor) commented:

@msau42 , isn't this issue resolved by kubernetes/kubernetes#96617 (which is the fix for kubernetes/kubernetes#77324)?

@mitchellmaler commented:

Still ran into this running Kubernetes v1.21.3. The node was deleted and the VolumeAttachment was still around specifying the old node name.

@msau42 (Collaborator, Author) commented Aug 9, 2021

nodes are deleted:

  • by the cluster autoscaler
  • during cluster upgrade

Those are graceful scenarios, where we highly recommend that a drain is done first. Draining before deleting the node should solve at least the graceful case, although there is a slight race: the drain process also needs to fully wait for volumes to be completely unmounted.

For the ungraceful case, the 6-minute force detach should kick in.

@alex-berger commented Aug 10, 2021

@msau42 this happens every now and then in our clusters, and 6 minutes is a long delay in a world with 99.95% and higher uptime SLAs. Also note that we already drain the nodes, and still it can happen that a node terminates immediately without proper draining.
That's reality, not just theory, and we have to deal with it. That's why we need resilient, self-healing controllers which make sure the system recovers from such errors automatically within a reasonable time.
After all, this is the reason why everybody is moving to Kubernetes. If we still want to live on assumptions and fix everything manually whenever our overly optimistic assumptions fail, then we don't need to maintain complex Kubernetes systems 😁.

@galgross commented:

Facing the same issue in k8s 1.21.5.

@gkaskonas commented:

Facing the same issue with the latest EKS and the EBS CSI controller.

@avivgold098 commented:

@jsafrane what do you think about this issue? My organization can help with the implementation.

@spriya-m commented:

Facing the issue with Kubernetes v1.24 as well.

@irasnyd commented Feb 13, 2024

Still hitting this with EKS v1.26

@karunsiri commented:

Still hitting this with EKS v1.29.

@nooperpudd commented Apr 15, 2024

Need to wait about 6 minutes for the volume to attach:

Warning  FailedAttachVolume      8m12s  attachdetach-controller  Multi-Attach error for volume "pvc-bc4b37b5-1726-414b-aac8-ec1a01209041" Volume is already exclusively attached to one node and can't be attached to another
Normal   SuccessfulAttachVolume  2m8s   attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-bc4b37b5-1726-414b-aac8-ec1a01209041"

@ivankatliarchuk commented:

Does anyone have experience with this or know of any ongoing efforts to address it? AWS EKS 1.26, 1.27 and most likely 1.28 affected as well.

@kbr-trackunit commented:

Does anyone have experience with this or know of any ongoing efforts to address it? AWS EKS 1.26, 1.27 and most likely 1.28 affected as well.

We are still running AWS EKS 1.28 and this has been a problem for a long time.

We are also running Karpenter, which kills off nodes all the time as scheduled. From time to time we see the prometheus-pushgateway node getting killed.
A new node is started and the prometheus-pushgateway pod is scheduled there. It fails due to the Multi-Attach error, which in turn means that the new pod is not registered as healthy. I'm not quite sure why the PVC doesn't register that the pod is being terminated, but the PV stays attached to the old node according to the ebs-csi-driver until it reaches the timeout after 5 minutes.
As soon as it's released, the new pod starts up.

I'm unsure whether this issue is strictly related to Kubernetes or if the solution is maybe to be found inside the AWS ebs-csi-driver.

We have had this issue since before 1.21.
