Proposal: Informed strategies #489
Comments
Mike, FYI: this is an issue the k/k scheduler is tackling, hope it's helpful: kubernetes/kubernetes#94009 (you can find the associated PRs and the doc Aldo authored there)
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale

/remove-lifecycle stale
@k8s-triage-robot: Closing this issue.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
One of the most-requested features we see for the descheduler is to make it more immediately reactive to changes in cluster state, rather than relying on interval-based periodic runs.
The trouble with providing this is that each strategy is affected by different conditions, so identifying a single informer to trigger descheduler runs is difficult and inefficient. For some strategies, it may not even be possible. There is also the matter of load: a burst of cluster updates could trigger many descheduling runs, hurting performance. For this reason, some users may still prefer periodic runs.
To try a solution to this, I opened a proof-of-concept PR in #488. This design does a few things which I believe will start us down the path toward enabling reactive descheduling for more strategies:

It adds a new `StrategyController` type, which essentially wraps any existing strategy function with its own control loop (fed by an informer-driven workqueue) that can be run in a separate thread from the main descheduler controller. We use a similar design in many of our controllers in OpenShift, and the workqueue provides good error handling and retry logic which may be helpful in the future.
It adds a `runMode` field to the descheduler strategy type with 2 options: `Default` and `Informed`. A strategy can only be in one mode (it doesn't make sense to periodically run a strategy if it's already reacting whenever a relevant change happens in the cluster). This also frees up resources for the periodic runs.

To start, I added this feature to 2 strategies, `NodeTaints` and `NodeAffinity`, because these strategies only rely on changes to `Node` objects (taints only need to run if a node is un/tainted, and affinity only needs to run if a node's labels change). I think this is the simplest start to establish the design, which can be expanded to other strategies.

There is also the caveat that this requires the descheduler to run as its own Deployment (not a Job or CronJob) with a `deschedulingInterval`. This is not currently a very common way to run the descheduler, but there's no reason we couldn't promote it more. To go with this, I added a new manifest template for running as a Deployment.

Please take a look at the linked PR (#488) and give any feedback. I would like to target the Node strategies as a feature for the 1.21 release, and look at adding it to other strategies in future releases.
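For reference, a policy using the proposed field might look like the sketch below. The `runMode` field and its placement are hypothetical (taken from the spirit of the PoC and subject to review); the surrounding structure follows the existing `DeschedulerPolicy` v1alpha1 format, where these strategies appear under their full names.

```yaml
# Hypothetical sketch: runMode is the proposed field, not a released API.
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  "RemovePodsViolatingNodeTaints":
    enabled: true
    runMode: Informed   # proposed: re-run whenever a Node is un/tainted
  "RemovePodsViolatingNodeAffinity":
    enabled: true
    runMode: Default    # proposed: keep running on the periodic interval
    params:
      nodeAffinityType:
      - requiredDuringSchedulingIgnoredDuringExecution
```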
/kind feature
/cc @ingvagabund @seanmalloy