Better control for Critical Pods #125
Right now the control to mark pods as critical is very basic and requires changing the annotations of many pods.
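For context, here is a minimal sketch (not the descheduler's actual code) of the kind of per-pod check this implies, assuming the marker in question is the `scheduler.alpha.kubernetes.io/critical-pod` annotation that was in use at the time:

```go
package main

import "fmt"

// Assumed marker: the scheduler.alpha.kubernetes.io/critical-pod annotation
// that critical pods carried at the time (later replaced by priority classes).
const criticalPodAnnotation = "scheduler.alpha.kubernetes.io/critical-pod"

// isCritical reports whether a pod's annotations mark it as critical.
// Today every pod to be protected needs this annotation added, which is
// what makes the current control so coarse.
func isCritical(annotations map[string]string) bool {
	_, ok := annotations[criticalPodAnnotation]
	return ok
}

func main() {
	annotated := map[string]string{criticalPodAnnotation: ""}
	fmt.Println(isCritical(annotated))           // true
	fmt.Println(isCritical(map[string]string{})) // false
}
```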
Proposal 1 - Non-critical annotation
If I have 100 pods but I want the descheduler to consider only 20 of them "non-critical", I have to add annotations to the other 80 pods. We could instead have a "non-critical" annotation so that only those 20 pods need to be marked. This could be controlled with an argument such as `--non-critical-pod-matcher=true` (default false).
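A rough sketch of what the inverted check could look like; only the `--non-critical-pod-matcher` flag name comes from this proposal, and the annotation key below is hypothetical:

```go
package main

import "fmt"

// Hypothetical annotation key, named here only for illustration.
const nonCriticalAnnotation = "descheduler.alpha.kubernetes.io/non-critical-pod"

// isCriticalPod flips the default when the proposed --non-critical-pod-matcher
// flag is enabled: every pod is treated as critical unless it explicitly opts
// out, so only the 20 evictable pods need an annotation instead of the other 80.
func isCriticalPod(nonCriticalMatcher bool, annotations map[string]string) bool {
	if !nonCriticalMatcher {
		// Current behaviour: critical only when explicitly annotated.
		_, ok := annotations["scheduler.alpha.kubernetes.io/critical-pod"]
		return ok
	}
	_, optedOut := annotations[nonCriticalAnnotation]
	return !optedOut
}

func main() {
	evictable := map[string]string{nonCriticalAnnotation: ""}
	fmt.Println(isCriticalPod(true, evictable))           // false: may be evicted
	fmt.Println(isCriticalPod(true, map[string]string{})) // true: left alone
}
```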
Proposal 2 - Consider current labels as critical
If I already have an annotation on my running applications that I know identifies a set of critical pods, it would be nice to be able to say "Pods with this custom annotation and value are considered critical". With this, no changes would have to be applied at all to make the descheduler run. Personally, I have an annotation called "layer" with the values (backend|monitoring|data|frontend). I consider my data and monitoring Pods critical; if I already have this annotation, why add another?
It could be done with `--extra-critical-annotations="layer=data,layer=monitoring,k8s-app=prometheus"`. And if `--non-critical-pod-matcher` is set to true, then `--extra-non-critical-annotations="..."` could be used as well.
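A rough sketch of how such a flag value could be parsed and matched against a pod's existing annotations; the function names are illustrative and nothing here exists in the descheduler:

```go
package main

import (
	"fmt"
	"strings"
)

// parseExtraCritical parses a flag value like
// "layer=data,layer=monitoring,k8s-app=prometheus" into key/value pairs.
func parseExtraCritical(flagValue string) [][2]string {
	var pairs [][2]string
	for _, item := range strings.Split(flagValue, ",") {
		kv := strings.SplitN(strings.TrimSpace(item), "=", 2)
		if len(kv) == 2 {
			pairs = append(pairs, [2]string{kv[0], kv[1]})
		}
	}
	return pairs
}

// matchesExtraCritical reports whether any of the pod's existing annotations
// match one of the configured key=value pairs, so pods already labelled
// e.g. layer=monitoring need no new annotation to be treated as critical.
func matchesExtraCritical(annotations map[string]string, pairs [][2]string) bool {
	for _, p := range pairs {
		if v, ok := annotations[p[0]]; ok && v == p[1] {
			return true
		}
	}
	return false
}

func main() {
	pairs := parseExtraCritical("layer=data,layer=monitoring,k8s-app=prometheus")
	pod := map[string]string{"layer": "monitoring"}
	fmt.Println(matchesExtraCritical(pod, pairs)) // true: treated as critical
}
```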
fejta-bot marked this issue stale after 90d of inactivity and rotten after a further 30d, then closed it ("@fejta-bot: Closing this issue."). Send feedback to sig-testing, kubernetes/test-infra and/or fejta. Instructions for interacting with the bot using PR comments are available here; questions or suggestions about its behavior can be filed as an issue against the kubernetes/test-infra repository.