During a rolling update of the Deployment, the system tries to create the new pod and attach the volume before shutting down the old pod. As a result, pods from the new ReplicaSet cannot attach the volume because the old pods still have it attached.
Error:
MountVolume.SetUp failed for volume "myservice-db" : mount command failed, status: Failure, reason: myservice-db is existed and is attached
This sounds like a problem with Kubernetes rolling updates and persistent volumes that only support the ReadWriteOnce access mode (which is essentially all block storage solutions, including EBS, Google Persistent Disk, and AzureDisk; see here).
In its current form, a rolling update cannot replace pods with an attached volume. According to the rolling update design doc, Kubernetes starts the new pod first, then stops the old pod. Since the volume attached to the pod is RWO, it cannot be attached to the new pod until it has been detached from the old one.
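A minimal sketch of the kind of setup that hits this, with placeholder names (`myservice`, `myservice-db`): a Deployment whose pod template mounts a ReadWriteOnce PVC. With the default RollingUpdate strategy, the new pod is scheduled while the old pod still holds the volume, so the mount fails with the error above.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myservice-db
spec:
  accessModes:
    - ReadWriteOnce          # block storage: only one pod/node can attach at a time
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
spec:
  replicas: 1
  strategy:
    type: RollingUpdate      # default: the new pod starts before the old pod is stopped
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
        - name: myservice
          image: myservice:latest   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: myservice-db  # old and new pods both claim this RWO volume
```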
A similar issue was filed for Azure at kubernetes/kubernetes#52236, with a recommendation to use AzureFile instead of AzureDisk.
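That recommendation amounts to switching to a backend that supports ReadWriteMany, so the volume can be attached to the new pod while the old one is still running. A rough sketch, assuming an `azurefile` StorageClass exists in the cluster (the name is an assumption; adjust to your environment):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myservice-db
spec:
  accessModes:
    - ReadWriteMany              # AzureFile supports multiple simultaneous attachments
  storageClassName: azurefile    # assumed StorageClass name
  resources:
    requests:
      storage: 10Gi
```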
I just realized you can use a StatefulSet instead of a Deployment, which is aware of the different PVCs associated with its pods. It will do the right thing: stop the old pod first, then start the new one with the same volume.
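A rough sketch of that approach, again with placeholder names. Each replica gets its own PVC from `volumeClaimTemplates`, and during an update the StatefulSet controller deletes the old pod before creating its replacement, so the RWO volume is free to re-attach:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myservice
spec:
  serviceName: myservice
  replicas: 1
  selector:
    matchLabels:
      app: myservice
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
        - name: myservice
          image: myservice:latest   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:
    - metadata:
        name: data                  # becomes data-myservice-0, data-myservice-1, ...
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
```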