kubelet nestedPendingOperations may leak an operation, leaving the same PV unable to mount or unmount #109047
Comments
@Dingshujie: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig storage
/sig node
+1, I am hitting the same problem.
I don't know much about volumeManager. It seems that there is a problem with the override mechanism. |
I also don't know much about the design history, but the comments in the volume manager seem to conflict with each other between UnmountVolume and UnmountDevice:

```go
// UnmountVolume
// All volume plugins can execute unmount/unmap for multiple pods referencing the
// same volume in parallel
podName := volumetypes.UniquePodName(volumeToUnmount.PodUID)
return oe.pendingOperations.Run(volumeToUnmount.VolumeName, podName, "" /* nodeName */, generatedOperations)

// UnmountDevice
// Avoid executing unmount/unmap device from multiple pods referencing
// the same volume in parallel
podName := nestedpendingoperations.EmptyUniquePodName
return oe.pendingOperations.Run(deviceToDetach.VolumeName, podName, "" /* nodeName */, generatedOperations)
```

We need someone who is familiar with this to explain.
Yes, we need to work out the rules for when an operation is considered to already exist.
cc @gnufied |
Suppose there are the following cases:
So we should improve isOperationExists to satisfy case 2, that is, the intermediate state. This logic has probably remained the same since this PR: #28939.
/remove-sig node for sig storage to triage |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
…nto 'tke/v1.20.6' (merge request !907) fix nestedPendingOperations mount and umount parallel bug Issue: kubernetes#109047 Cherry Pick: kubernetes#110951 See the issue for details: the volume manager has timing bugs that cause every mount operation for a given PV to get stuck, leaving all pods that use that PV stuck in Creating; only restarting kubelet recovers.
What happened?
In my scenario, CronJobs that all use the same SFS PVC (SFS is the NFS storage on Huawei Cloud) run the same training workload. A node schedules many pods that use the same PVC at the same time, so many mount and unmount tasks for this PVC execute concurrently. Eventually all pods on the node are stuck in the container-creating state, waiting for the PVC to mount.
Checking the logs, we cannot find any operationExecutor.MountVolume started or failed messages, i.e. nothing like "operationExecutor.MountVolume started for".
We also checked the volume_manager_total_volumes metric: the volume has been added to the desired state of the world (DSW).
Analyzing the code: if the error is an expected error (isExpectedError), kubelet does not log anything.
operationExecutor generates the operation and calls pendingOperations.Run. Run checks the operations slice to see whether the same operation already exists; if it does and its operationPending status is true, the same operation is currently executing, and Run returns AlreadyExists.
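To make that rejection path concrete, here is a minimal standalone model of the check (my own simplification for illustration, not the kubelet source; the names are made up):

```go
package main

import (
	"errors"
	"fmt"
)

// op is a simplified model of one entry in the nestedPendingOperations cache.
type op struct {
	volumeName string
	pending    bool // true while a goroutine is still executing the operation
}

// errAlreadyExists stands in for the real AlreadyExists error type.
var errAlreadyExists = errors.New("operation with the same key already exists")

// tryRun models the front half of Run: a new operation is rejected while a
// cached operation for the same volume is still pending.
func tryRun(cache []op, volumeName string) error {
	for _, prev := range cache {
		if prev.volumeName == volumeName && prev.pending {
			return errAlreadyExists
		}
	}
	return nil // here the real code records the operation and starts a goroutine
}

func main() {
	cache := []op{{volumeName: "pv-1", pending: true}}
	err := tryRun(cache, "pv-1")
	// The reconciler treats AlreadyExists as an "expected" error and logs
	// nothing, which is why neither an "operationExecutor.MountVolume started"
	// line nor a failure line shows up in the kubelet log.
	fmt.Println(errors.Is(err, errAlreadyExists)) // true
}
```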
An operationKey is composed of volumeName, podName, and nodeName.
The MountVolume operationKey is {volumeName, EmptyUniquePodName, EmptyNodeName}; EmptyUniquePodName is used to avoid executing mount/map from multiple pods referencing the same volume in parallel.
The UnmountVolume operationKey is {volumeName, podUID, EmptyNodeName}, because all volume plugins can execute unmount/unmap for multiple pods referencing the same volume in parallel.
Now look at the isOperationExists func: if previousOp.podName equals EmptyUniquePodName, or the current operation's podName equals EmptyUniquePodName, it reports a podName match, so a mount operation can match an unmount operation and vice versa.
Because of isOperationExists, if a mount is currently executing, an unmount cannot be added to the operations slice, and if an unmount is currently executing, a mount cannot be added. But multiple pods referencing the same volume can add multiple unmount operations, which may lead to an operation leak.
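A standalone sketch of that matching rule, written from the description above (simplified, not the actual kubelet source; EmptyUniquePodName is modeled as an empty string and nodeName is omitted because it is always EmptyNodeName in these cases):

```go
package main

import "fmt"

// operationKey mirrors {volumeName, podName}; an empty podName plays the
// role of EmptyUniquePodName.
type operationKey struct {
	volumeName string
	podName    string
}

// keysMatch models the podName rule: an empty podName on either side acts as
// a wildcard, so the volume-wide MountVolume key collides with any per-pod
// UnmountVolume key for the same volume, and vice versa.
func keysMatch(prev, next operationKey) bool {
	volumeNameMatch := prev.volumeName == next.volumeName
	podNameMatch := prev.podName == "" || next.podName == "" || prev.podName == next.podName
	return volumeNameMatch && podNameMatch
}

func main() {
	mount := operationKey{volumeName: "pv-1"}                        // MountVolume: EmptyUniquePodName
	unmountPod1 := operationKey{volumeName: "pv-1", podName: "pod1"} // UnmountVolume for pod1
	unmountPod2 := operationKey{volumeName: "pv-1", podName: "pod2"} // UnmountVolume for pod2

	fmt.Println(keysMatch(mount, unmountPod1))       // true: mount and a per-pod unmount share one slot
	fmt.Println(keysMatch(unmountPod1, unmountPod2)) // false: per-pod unmounts can coexist
}
```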
t0: two pods (pod1 and pod2) start UnmountVolume operations for the same volume, so the operations slice holds two umount entries.
t1: pod1's umount fails; its entry's pending flag is set to false.
t2: pod3 is added; the operation executor submits a MountVolume operation, which matches index 0 (pod1's completed umount) and overrides it.
t3: the mount operation fails; its pending flag is set to false.
t4: a new umount operation for pod2 is submitted; it finds index 0 (the completed mount) and overrides it. Now there are two identical pod2 umount entries in the slice, and two goroutines executing pod2's umount.
t5: the first goroutine's pod2 umount fails; it updates the first matching entry, index 0, setting pending to false.
t6: the second goroutine's pod2 umount succeeds and deletes the first matching entry, index 0.
Finally a stale umount operation is left in the cache (the other pod2 entry, still marked pending), and it makes every subsequent mount operation return AlreadyExistsError without triggering any CSI call.
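The sequence above can be replayed with a small standalone simulation. This is my own simplified model written for illustration, not the kubelet source; it keeps only the pieces needed to show the leak (first-match lookup, in-place override of a completed slot, and complete/delete by first match):

```go
package main

import "fmt"

// key models {volumeName, podName}; "" plays the role of EmptyUniquePodName.
type key struct {
	volumeName string
	podName    string
}

type operation struct {
	key     key
	name    string // label for printing, e.g. "umount pod2 (t0)"
	pending bool   // true while a goroutine is still executing it
}

// firstMatch mimics isOperationExists: it returns the index of the FIRST
// entry whose volume matches and whose podName matches, where an empty
// podName on either side acts as a wildcard.
func firstMatch(cache []operation, k key) int {
	for i, prev := range cache {
		if prev.key.volumeName != k.volumeName {
			continue
		}
		if prev.key.podName == "" || k.podName == "" || prev.key.podName == k.podName {
			return i
		}
	}
	return -1
}

// run mimics Run: reject if the first match is still pending, override the
// slot if it is not, append if there is no match at all.
func run(cache []operation, newOp operation) ([]operation, bool) {
	if i := firstMatch(cache, newOp.key); i >= 0 {
		if cache[i].pending {
			return cache, false // AlreadyExists: silently rejected
		}
		cache[i] = newOp // override the completed slot
		return cache, true
	}
	return append(cache, newOp), true
}

// fail mimics a goroutine finishing with an error: it marks the first entry
// matching its key as no longer pending (possibly not its own entry).
func fail(cache []operation, k key) {
	if i := firstMatch(cache, k); i >= 0 {
		cache[i].pending = false
	}
}

// succeed mimics a goroutine finishing successfully: it deletes the first
// entry matching its key, moving the tail element into the freed slot.
func succeed(cache []operation, k key) []operation {
	i := firstMatch(cache, k)
	if i < 0 {
		return cache
	}
	cache[i] = cache[len(cache)-1]
	return cache[:len(cache)-1]
}

func main() {
	var cache []operation
	pv := "pv-1"

	cache, _ = run(cache, operation{key{pv, "pod1"}, "umount pod1 (t0)", true}) // t0
	cache, _ = run(cache, operation{key{pv, "pod2"}, "umount pod2 (t0)", true}) // t0
	fail(cache, key{pv, "pod1"})                                                // t1: pod1 umount fails
	cache, _ = run(cache, operation{key{pv, ""}, "mount pod3 (t2)", true})      // t2: overrides index 0
	fail(cache, key{pv, ""})                                                    // t3: mount fails
	cache, _ = run(cache, operation{key{pv, "pod2"}, "umount pod2 (t4)", true}) // t4: duplicate pod2 umount
	fail(cache, key{pv, "pod2"})                                                // t5: 1st goroutine fails, updates index 0
	cache = succeed(cache, key{pv, "pod2"})                                     // t6: 2nd goroutine succeeds, deletes index 0

	fmt.Printf("leftover: %+v\n", cache) // one pending "umount pod2 (t0)" entry, no goroutine behind it

	_, started := run(cache, operation{key{pv, ""}, "mount (later)", true})
	fmt.Println("later mount accepted?", started) // false: AlreadyExists until kubelet restarts
}
```

Running this leaves exactly one pending umount entry in the cache with no goroutine behind it, and the final mount attempt is rejected, matching the behavior observed on the node.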
I think there are other scenarios that can trigger this leak as well.
Also, when an operation succeeds, deleteOperation replaces the removed entry with the tail element, which may increase the probability of hitting this, because isOperationExists always picks the first match.
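For reference, the delete-by-swap pattern being described looks roughly like this (a generic standalone sketch, not the kubelet function): removing an element copies the tail entry into the freed slot, so a surviving duplicate can end up at a low index where the first-match search keeps selecting it.

```go
package main

import "fmt"

// deleteAt removes s[i] by moving the last element into the freed slot and
// truncating, so the remaining elements are reordered rather than shifted.
func deleteAt(s []string, i int) []string {
	s[i] = s[len(s)-1]
	return s[:len(s)-1]
}

func main() {
	ops := []string{"mount (completed)", "umount pod2 (still pending)"}
	ops = deleteAt(ops, 0)
	fmt.Println(ops) // [umount pod2 (still pending)] -- the survivor now sits at index 0
}
```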
The override mechanism may also invalidate the exponential backoff, for example when mount and umount operations take turns executing.
What did you expect to happen?
The PV mounts and the pods run.
How can we reproduce it (as minimally and precisely as possible)?
Reproduce steps:
Create CronJobs that all use the same PVC (a CSI SFS volume).
The CSI driver restarts periodically.
Anything else we need to know?
No response
Kubernetes version
Cloud provider
OS version
Install tools
Container runtime (CRI) and version (if applicable)
Related plugins (CNI, CSI, ...) and versions (if applicable)