Attached Volume data structure issue in volume manager's actual state #61248
Comments
cc @saad-ali
Did #61071 resolve this issue? Should it be closed?
@jingxu97 is this issue specific to 1.10? Or has it existed longer than that?
@jberkus it has existed for a long time.
A possible solution: in actual_state_of_world.go, move the volume spec information into the mountedPods data structure.
There is a problem if we just move it, though. I think we have three alternative approaches to handle this problem:
@mlmhl are you suggesting that the spec name is also useful, so we should keep it for the mount operation's logs?
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add volume spec to mountedPod in actual state of world: add the volume spec into the mountedPod data struct in the actual state of the world. Fixes issue #61248
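A minimal Go sketch of that change, using simplified, hypothetical type and field names (the real structures in actual_state_of_world.go are more involved): each mountedPod carries its own volume spec name, so tear-down resolves the correct per-pod path.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// mountedPod now stores the spec name the pod actually mounted with
// (field names are illustrative, not the real kubelet types).
type mountedPod struct {
	podUID   string
	specName string // per-pod in-line volume spec name
}

// attachedVolume tracks every pod mounting the volume, each with its own spec.
type attachedVolume struct {
	mountedPods map[string]mountedPod // keyed by pod UID
}

// podVolumeDir builds the per-pod mount path from that pod's own spec name.
func podVolumeDir(p mountedPod) string {
	return filepath.Join("/var/lib/kubelet/pods", p.podUID,
		"volumes/kubernetes.io~gce-pd", p.specName)
}

func main() {
	vol := attachedVolume{mountedPods: map[string]mountedPod{}}
	vol.mountedPods["uid-1"] = mountedPod{podUID: "uid-1", specName: "test-volume-0"}
	vol.mountedPods["uid-2"] = mountedPod{podUID: "uid-2", specName: "test-volume-1"}

	// Each pod now resolves its own, correct volume directory.
	fmt.Println(podVolumeDir(vol.mountedPods["uid-2"]))
	// prints /var/lib/kubelet/pods/uid-2/volumes/kubernetes.io~gce-pd/test-volume-1
}
```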
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
There is #61549, which claims to fix this issue.
/remove-lifecycle rotten
#61549 fixed the issue. Closing it.
Add volume spec into mountedPod data struct in the actual state of the world. RelatedTo: kubernetes#61248
Two or more pods are allowed to use the same volume. If the pods use in-line volume specs (the volume spec is embedded directly in the pod spec), the volume specs in different pods may have different names. With the current data structure in the volume manager, we can hit conflicts, as explained below.
Consider these two pod specs as an example:
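A hedged reconstruction of what the two pod specs might look like (the pod and volume names test-p1, test-p2, test-volume-0, and test-volume-1 come from the discussion below; the disk name my-gce-pd and the container details are assumptions for illustration):

```yaml
# Two pods mounting the SAME GCE PD via in-line specs with DIFFERENT names.
apiVersion: v1
kind: Pod
metadata:
  name: test-p1
spec:
  containers:
  - name: test
    image: busybox
    volumeMounts:
    - name: test-volume-0
      mountPath: /data
  volumes:
  - name: test-volume-0        # first pod calls the volume test-volume-0
    gcePersistentDisk:
      pdName: my-gce-pd        # hypothetical disk name
---
apiVersion: v1
kind: Pod
metadata:
  name: test-p2
spec:
  containers:
  - name: test
    image: busybox
    volumeMounts:
    - name: test-volume-1
      mountPath: /data
  volumes:
  - name: test-volume-1        # second pod calls the SAME disk test-volume-1
    gcePersistentDisk:
      pdName: my-gce-pd
```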
Suppose test-p1 is created first. Its pod volume dir would be /var/lib/kubelet/pods/{pod-uid}/volumes/kubernetes.io~gce-pd/test-volume-0, and this volume spec is added to the actual state: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/volumemanager/cache/actual_state_of_world.go#L376
When test-p2 is created, its pod volume dir would be /var/lib/kubelet/pods/{pod-uid}/volumes/kubernetes.io~gce-pd/test-volume-1. But when the volume manager tries to add it to the actual state, the volume spec is not updated, because the volume was already added to the state for pod test-p1.
As a result, when pod test-p2 is deleted, the volume manager constructs the wrong volume path (the path for the first pod's volume) and fails to complete the tear-down process correctly.
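The conflict can be sketched in Go with simplified, hypothetical types (the real kubelet structures in actual_state_of_world.go are more involved): the actual state stores only one spec name per volume, so the second pod's spec name is silently dropped and its tear-down path is built from the first pod's name.

```go
package main

import (
	"fmt"
	"path/filepath"
)

// attachedVolume mirrors the buggy layout: ONE spec name per volume,
// shared by all pods that mount it (names are illustrative).
type attachedVolume struct {
	specName string
	pods     map[string]struct{}
}

type actualState struct {
	volumes map[string]*attachedVolume // keyed by unique volume name (the disk)
}

// addPodToVolume sets the spec name only when the volume is first added,
// so a second pod's different in-line spec name is ignored.
func (as *actualState) addPodToVolume(podUID, volumeName, specName string) {
	vol, ok := as.volumes[volumeName]
	if !ok {
		vol = &attachedVolume{specName: specName, pods: map[string]struct{}{}}
		as.volumes[volumeName] = vol
	}
	vol.pods[podUID] = struct{}{}
}

// podVolumeDir builds the per-pod mount path from the single stored spec name.
func (as *actualState) podVolumeDir(podUID, volumeName string) string {
	return filepath.Join("/var/lib/kubelet/pods", podUID,
		"volumes/kubernetes.io~gce-pd", as.volumes[volumeName].specName)
}

func main() {
	as := &actualState{volumes: map[string]*attachedVolume{}}
	as.addPodToVolume("uid-1", "my-gce-pd", "test-volume-0")
	as.addPodToVolume("uid-2", "my-gce-pd", "test-volume-1")

	// Tear-down for the second pod resolves the FIRST pod's spec name:
	fmt.Println(as.podVolumeDir("uid-2", "my-gce-pd"))
	// prints .../uid-2/volumes/kubernetes.io~gce-pd/test-volume-0,
	// while the directory actually on disk ends in test-volume-1.
}
```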