move "kubectl drain" into the server #25625
I think that applies to both autoscaling down and node upgrades. /cc @mwielgus @ihmccreery
@roberthbailey I'm wondering, is this really a common use case that's worth doing? Could you explain in more detail, for example, how it would apply to autoscaling down?
When removing a node from a cluster (either because an autoscaler decides to free up space or a user requests the release of resources), we should drain the node before deleting it from the cluster. If we want to build this into automation (e.g. autoscaling), then we don't want to rely on the drain command only existing in the kubectl client that is intended to be used by a human.
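For context on what "building this into automation" currently entails: any controller that wants drain semantics has to reassemble them client-side from the cordon patch and the Eviction subresource (both real APIs). A minimal sketch, assuming client-go v0.22+ for EvictV1, and deliberately omitting the retries, DaemonSet/mirror-pod handling, and wait-for-deletion logic that kubectl drain layers on top:

```go
package drainsketch

import (
	"context"

	policyv1 "k8s.io/api/policy/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// Drain cordons a node and evicts its pods: the logic every automated
// client has to reimplement as long as drain lives only in kubectl.
func Drain(ctx context.Context, cs kubernetes.Interface, node string) error {
	// Cordon: mark the node unschedulable so nothing new lands on it.
	patch := []byte(`{"spec":{"unschedulable":true}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, node,
		types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		return err
	}

	// Evict (not delete) each pod so the API server enforces
	// PodDisruptionBudgets; a blocked eviction returns 429.
	pods, err := cs.CoreV1().Pods("").List(ctx, metav1.ListOptions{
		FieldSelector: "spec.nodeName=" + node,
	})
	if err != nil {
		return err
	}
	for i := range pods.Items {
		p := &pods.Items[i]
		ev := &policyv1.Eviction{
			ObjectMeta: metav1.ObjectMeta{Name: p.Name, Namespace: p.Namespace},
		}
		if err := cs.CoreV1().Pods(p.Namespace).EvictV1(ctx, ev); err != nil {
			return err
		}
	}
	return nil
}
```

A component that deletes a node without first doing something like the above bypasses eviction entirely, so PodDisruptionBudgets are never consulted, which is part of why baking drain into the server is attractive.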
@roberthbailey I agree with that, but I think the question might have been why the dry-run mode / ability to just ask which is the best node to drain but not actually drain it is useful. TBH I can't remember why you suggested it. Presumably it's because the client (e.g. autoscaler) wants to manage the drain itself?
I think it was for upgrades. But I don't recall why that would be better than just asking the server to do it.
I would like to see
One model for this: you ask the system to drain N nodes, give it some parameters controlling the choice, the number drained simultaneously, etc., and it picks the nodes, does the drains, and gives a callback to an HTTP endpoint you specify when it is done. BTW, the Mesos machine maintenance model is described here.
There would need to be a timeout or a way to cancel a drain that is taking too long.
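A purely hypothetical sketch of that model, just to make the knobs concrete: no DrainRequest resource exists in Kubernetes and every name below is invented, but it shows where the node count, concurrency bound, completion callback, timeout, and cancellation (deleting the object) from the two comments above could live:

```go
package v1alpha1

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// DrainRequest (hypothetical) asks the server to pick and drain Count nodes.
type DrainRequest struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Spec   DrainRequestSpec   `json:"spec"`
	Status DrainRequestStatus `json:"status,omitempty"`
}

type DrainRequestSpec struct {
	// NodeSelector narrows the candidate pool; the server chooses
	// Count nodes from it.
	NodeSelector *metav1.LabelSelector `json:"nodeSelector,omitempty"`
	Count        int32                 `json:"count"`

	// MaxConcurrent bounds how many nodes drain simultaneously.
	MaxConcurrent int32 `json:"maxConcurrent,omitempty"`

	// Timeout aborts a drain that takes too long; deleting the
	// DrainRequest object cancels an in-flight drain.
	Timeout *metav1.Duration `json:"timeout,omitempty"`

	// CallbackURL is an HTTP endpoint the server notifies on completion.
	CallbackURL string `json:"callbackURL,omitempty"`

	// DryRun asks the server only to report its choice in the status,
	// without cordoning or evicting anything.
	DryRun bool `json:"dryRun,omitempty"`
}

type DrainRequestStatus struct {
	// SelectedNodes is the server's choice; DrainedNodes tracks progress.
	SelectedNodes []string `json:"selectedNodes,omitempty"`
	DrainedNodes  []string `json:"drainedNodes,omitempty"`
}
```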
One new compelling reason to do this is:
We now have to choose between responsibility for the correctness of these implementations (including version skew, matrix testing), disallowing them entirely, or deferring the correctness problems until later with disclaimers (i.e. create tech debt). None of these is satisfactory.
@davidopp or @timothysc per my last message, is there any chance this could be prioritized for 1.9 or maybe next year?
It's entirely based on resources and folks willing to show up and do the work.
@timothysc Can I extrapolate that as of right now, no one has shown up and indicated they wish to do this work?
Yes.
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
@fabiand In sig-cloud-provider - cc @andrewsykim @cheftako
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules: after 90d of inactivity, lifecycle/stale is applied; after 30d of further inactivity, lifecycle/rotten is applied; after another 30d, the issue or PR is closed. You can mark this issue as fresh with /remove-lifecycle stale, mark it as rotten with /lifecycle rotten, close it with /close, or offer to help out with Issue Triage. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. /close
@k8s-triage-robot: Closing this issue.
/reopen
@redbaron: You can't reopen an issue/PR unless you authored it or you are a collaborator.
@yanirq: You can't reopen an issue/PR unless you authored it or you are a collaborator.
OK, @k8s-ci-robot, you won. Of course, the lack of spamming on a well-understood issue that is just waiting to be implemented is a clear sign that the issue is no longer relevant. I am with you. Good work.
/reopen
@Abirdcfly: Reopened this issue.
@davidopp: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. The triage/accepted label can be added by org members by writing /triage accepted in a comment.
@redbaron Please go on...😂
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. /close
@k8s-triage-robot: Closing this issue.
random note: @roberthbailey had suggested it might be useful to have a "dry run" mode where you just ask which is the best node to drain, or best N of some set, but don't drain it. not sure how you express that using a REST API though. also can't remember what the use case was (it might have been for autoscaling scale-down, so you know which node is best to remove?)
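One hedged answer to the REST question: the Kubernetes API later standardized a dryRun option on write requests (metav1.CreateOptions.DryRun with metav1.DryRunAll), and the hypothetical DrainRequest sketched earlier could reuse that convention to ask "which node would you pick?" without draining anything. Everything below except metav1.CreateOptions and metav1.DryRunAll is invented for illustration:

```go
package v1alpha1

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// DrainRequestInterface stands in for a generated typed client for the
// hypothetical DrainRequest resource; no such client exists today.
type DrainRequestInterface interface {
	Create(ctx context.Context, r *DrainRequest, opts metav1.CreateOptions) (*DrainRequest, error)
}

// bestNodeToDrain asks the server for its choice without acting on it,
// e.g. so an autoscaler knows which node is best to remove on scale-down.
func bestNodeToDrain(ctx context.Context, c DrainRequestInterface) (string, error) {
	req := &DrainRequest{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "scale-down-"},
		Spec:       DrainRequestSpec{Count: 1, DryRun: true},
	}
	// With DryRunAll nothing is persisted or drained; a server that
	// computes its node choice synchronously could still return the
	// choice in the object's status.
	out, err := c.Create(ctx, req, metav1.CreateOptions{
		DryRun: []string{metav1.DryRunAll},
	})
	if err != nil || len(out.Status.SelectedNodes) == 0 {
		return "", err
	}
	return out.Status.SelectedNodes[0], nil
}
```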
as an aside, we need to consolidate all the issues related to this: #7351, #6080, #6079, #3885, ...