Handle volume scheduling when nodes are shutdown #31
Currently, if a node gets shut down, pods using volumes don't get rescheduled, since we don't know whether the volumes are still in use.

Two solutions:

1. Create a flow with an interlock between the node lifecycle controller, taintManager, PodGC, attach_detach_controller, and the kubelet. One drawback of this solution is that we need to tie taint removal to finishing the eviction (more specifically, to finishing the volume detach).
2. Rely on the nodeReadiness gate and let the cloud provider implement a "shutdown" node condition, then act upon it to detach and remove the volume. One drawback of relying on a condition is that, unlike a taint, it cannot be tolerated (see the sketch after this list).
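As a rough illustration of the second option, here is a minimal Go sketch, assuming an out-of-tree cloud-provider loop built on client-go. The `Shutdown` condition type and `isInstanceShutdown` are hypothetical stand-ins (the condition is not part of core Kubernetes, and the function stands in for a real cloud instance API call); a controller watching this condition could then safely force-detach the node's volumes.

```go
// Minimal sketch of solution 2: a cloud-provider loop that reports a
// hypothetical "Shutdown" node condition. isInstanceShutdown is a
// placeholder, not an existing library function.
package main

import (
	"context"
	"log"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// Hypothetical condition type for this sketch; not part of core Kubernetes.
const nodeShutdown v1.NodeConditionType = "Shutdown"

// isInstanceShutdown stands in for querying the cloud provider's instance
// API about the node's power state.
func isInstanceShutdown(node *v1.Node) bool { return false }

// setShutdownCondition adds or updates the Shutdown condition on the node.
func setShutdownCondition(node *v1.Node, down bool) {
	status := v1.ConditionFalse
	if down {
		status = v1.ConditionTrue
	}
	now := metav1.Now()
	for i := range node.Status.Conditions {
		if node.Status.Conditions[i].Type == nodeShutdown {
			node.Status.Conditions[i].Status = status
			node.Status.Conditions[i].LastHeartbeatTime = now
			return
		}
	}
	node.Status.Conditions = append(node.Status.Conditions, v1.NodeCondition{
		Type:               nodeShutdown,
		Status:             status,
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "InstanceShutdown",
	})
}

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Poll the cloud provider and publish the condition on every node.
	for {
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Printf("listing nodes: %v", err)
		} else {
			for i := range nodes.Items {
				node := &nodes.Items[i]
				setShutdownCondition(node, isInstanceShutdown(node))
				// Conditions live in the status subresource, so use UpdateStatus.
				if _, err := client.CoreV1().Nodes().UpdateStatus(context.TODO(), node, metav1.UpdateOptions{}); err != nil {
					log.Printf("updating %s: %v", node.Name, err)
				}
			}
		}
		time.Sleep(30 * time.Second)
	}
}
```

Since a condition, unlike a taint, cannot be tolerated by pods, a condition like this would likely need to be mapped to a taint as well; Kubernetes later took that direction with the node.cloudprovider.kubernetes.io/shutdown taint.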
cc'ing folks involved for thoughts: @smarterclayton @liggitt @andrewsykim @jingxu97 and @yujuhong

/assign

Comments

Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/remove-lifecycle stale

Issues go stale after 90d of inactivity. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.

/close This probably belongs in SIG node going forward

@andrewsykim: Closing this issue. In response to this:

> /close This probably belongs in SIG node going forward