What happened:
In an edge case, e.g. after an etcd restore, the <disk1, LUN0> mapping can still exist in the API server while disk1 is no longer actually attached to the node. If another pod mounting disk2 is then scheduled to that node, disk2 occupies LUN0 as well, and both pods end up consuming disk1 at LUN0.
The CSI driver could avoid this issue during disk attachment: when disk2 is being attached to the node, the CSI driver controller could check the volumeattachments to see whether LUN0 is already in use by the k8s controller. If it is already allocated logically, the controller should allocate a different, unused LUN number, as in the sketch below.
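A minimal sketch of such a check, assuming client-go is available and that the driver records the LUN in a VolumeAttachment's status.attachmentMetadata under the key "LUN"; nextFreeLUN and maxLUNs are hypothetical names for illustration, not the driver's actual API:

```go
// Hypothetical sketch (not the driver's actual code): pick the lowest LUN on a
// node that no VolumeAttachment has already claimed, so a stale <disk1, LUN0>
// record left over from an etcd restore blocks LUN0 from being reused.
package lunalloc

import (
	"context"
	"fmt"
	"strconv"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

const maxLUNs = 64 // assumed per-node data-disk limit; the real limit depends on VM size

func nextFreeLUN(ctx context.Context, cs kubernetes.Interface, nodeName string) (int, error) {
	vas, err := cs.StorageV1().VolumeAttachments().List(ctx, metav1.ListOptions{})
	if err != nil {
		return -1, err
	}
	used := map[int]bool{}
	for _, va := range vas.Items {
		if va.Spec.NodeName != nodeName {
			continue
		}
		// Treat a LUN as taken if any VolumeAttachment on this node
		// records it, even if the disk is not attached at the cloud level.
		if s, ok := va.Status.AttachmentMetadata["LUN"]; ok {
			if lun, convErr := strconv.Atoi(s); convErr == nil {
				used[lun] = true
			}
		}
	}
	for lun := 0; lun < maxLUNs; lun++ {
		if !used[lun] {
			return lun, nil
		}
	}
	return -1, fmt.Errorf("no free LUN on node %q", nodeName)
}
```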
Below is an example of a volumeattachment; it has <disk1, LUN0> tied to a node name.
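Something like the following, reconstructed for illustration only; the object name, node name, and PV name are made up:

```yaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttachment
metadata:
  name: csi-0123456789abcdef            # hypothetical
spec:
  attacher: disk.csi.azure.com
  nodeName: aks-nodepool1-12345678-vmss000000   # hypothetical node name
  source:
    persistentVolumeName: pvc-disk1     # "disk1"
status:
  attached: true
  attachmentMetadata:
    LUN: "0"                            # the <disk1, LUN0> mapping
```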
What you expected to happen:
How to reproduce it:
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):
- Kernel (e.g. `uname -a`):