Dealing with pvs and pvcs on deleted nodes #201
Basically, I never want to write this thing again:
Yes, this is a general issue.
I agree a cloud controller is required. I've written up a proposal for this: https://docs.google.com/document/d/1SA9epEwA3jPwibRV0ccQwJ2UfZXoeUYKyNxNegt0vn4
Thank you for the link; I've added a thought to the document there. Is a prototype under way?
As an intermediate step, would it help if we put an OwnerRef on the PV pointing to the Node object? PVC protection will prevent the PV from actually being deleted, but if a user comes in and deletes the PVC, the PV object will get cleaned up instead of taking up cycles in the PV controller (which goes through all PVs in the cluster every few seconds).
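The intermediate step suggested above might look like the sketch below on a PV object: an ownerReference pointing at the Node, so that garbage collection removes the PV once the Node object is deleted. The node name and `uid` here are placeholders; the `uid` must match the live Node object's actual UID for garbage collection to work.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-example
  ownerReferences:
  - apiVersion: v1
    kind: Node
    name: example-node                             # placeholder node name
    uid: 00000000-0000-0000-0000-000000000000      # placeholder; must be the Node's real UID
spec:
  capacity:
    storage: 100Gi
  accessModes: ["ReadWriteOnce"]
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
```

Note that because the owner is cluster-scoped (a Node), this only covers deletion of the Node object itself, not the underlying disk.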
/remove-lifecycle stale
/remove-lifecycle stale
/help
@cdenneen: Please ensure the request meets the requirements listed here. If this request no longer meets these requirements, the label can be removed.
Maybe a better script, depending on your needs:
From k8s 1.23 there is a feature for graceful shutdown of a node, and from 1.24, non-graceful shutdown.
It seems that this issue has been fixed by #385, but I think handling it via the graceful node shutdown feature is a good idea.
Yes, but it is better to go with graceful shutdown of nodes. This feature allows nodes in our Kubernetes cluster to shut down in an orderly manner: instead of stopping abruptly, the node notifies Kubernetes, giving it time to safely relocate workloads to other nodes. It protects against data loss as well, whereas a non-graceful node shutdown is abrupt and can lead to data loss, application downtime, and disrupted services.
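Graceful node shutdown is configured on the kubelet via `KubeletConfiguration`; a minimal sketch (the durations here are illustrative, not recommendations):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Total time the node delays shutdown to terminate pods.
shutdownGracePeriod: 30s
# Portion of that time reserved for critical pods.
shutdownGracePeriodCriticalPods: 10s
```

With this in place, the kubelet uses a systemd inhibitor lock to delay node shutdown long enough to terminate regular pods first, then critical pods.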
The issue is addressed and resolved in #385.
When a node is deleted, either due to failure or due to an autoscale event removing it, the local static volumes on it are permanently destroyed. But the metadata for those PVs, and for any PVCs referencing them, is not.
This is frustrating because we have to take care not to reuse node names; yet in AKS, node names are deterministic and commonly reused, as you can see from these sample names:
This leads to rogue volumes and claims resurrecting themselves when a node pool is scaled up again, whether automatically or not.
Describe the solution you'd like in detail
I would like to tackle this from two angles:
To prevent this zombie-resurrection threat, I would like to extend the topology key with some sort of UUID for the node, for instance the Machine ID (7b59f36627eb48f0af387b76c602cf9b from one of those examples); though this may require changes outside this SIG and thus be future work, since we cannot benefit from things that we can't deploy in AKS.
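A sketch of what that first angle could look like on a PV's node affinity. The key `example.com/node-uid` is invented here purely for illustration (today the provisioner pins PVs with `kubernetes.io/hostname`); because the value incorporates the Machine ID, a recreated node with the same hostname but a different Machine ID would no longer match the stale PV:

```yaml
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: example.com/node-uid          # hypothetical key; hostname alone is not unique across node lifetimes
        operator: In
        values: ["7b59f36627eb48f0af387b76c602cf9b"]   # the node's Machine ID
```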
To prevent the leak in the first place, I would like a controller that watches nodes and, when a node is removed from the cluster, removes all the claims and PVs that were on that node. This would be a separate controller, so that it can be opt-in rather than run implicitly in the existing DaemonSets.
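The core selection logic of such a controller can be sketched as a pure function, independent of any cluster access. Everything here is illustrative (function names like `pvs_to_clean` are invented, and PVs are modeled as plain dicts shaped like the Kubernetes API objects): given the PVs in the cluster and the set of node names that still exist, find the local PVs pinned to nodes that are gone.

```python
# Hypothetical cleanup-controller logic: select local PVs whose
# nodeAffinity pins them to a node that no longer exists.

HOSTNAME_KEY = "kubernetes.io/hostname"  # key the provisioner uses in PV nodeAffinity

def pv_node(pv):
    """Return the hostname a local PV is pinned to, or None if it has none."""
    terms = (pv.get("spec", {})
               .get("nodeAffinity", {})
               .get("required", {})
               .get("nodeSelectorTerms", []))
    for term in terms:
        for expr in term.get("matchExpressions", []):
            if expr.get("key") == HOSTNAME_KEY and expr.get("operator") == "In":
                values = expr.get("values", [])
                if values:
                    return values[0]
    return None

def pvs_to_clean(pvs, live_nodes):
    """Names of PVs bound to nodes that are no longer in the cluster."""
    return [pv["metadata"]["name"]
            for pv in pvs
            if (node := pv_node(pv)) is not None and node not in live_nodes]
```

A real controller would watch Node delete events, run this selection, and then delete the matching PVCs and PVs via the API; keeping the selection pure makes it easy to unit-test against fixture objects.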
Describe alternatives you've considered
I have not considered any alternatives.
Additional context