Better control for Critical Pods #125

Closed
AndresPineros opened this issue Dec 14, 2018 · 4 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@AndresPineros

AndresPineros commented Dec 14, 2018

Right now the mechanism for marking pods as critical is very basic and requires changing the annotations of many pods.

Proposal 1 - Non-critical annotation
If I have 100 pods but want the descheduler to consider only 20 of them "non-critical", I currently have to add the critical-pod annotation to the other 80. We could instead have a "non-critical" annotation so that only those 20 pods need to be marked. This could be controlled with an argument such as --non-critical-pod-matcher=true (default false).
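
A rough sketch of the idea in Go (the flag and the annotation key are only examples, nothing that exists in the descheduler today):

```go
// Hypothetical sketch of Proposal 1: with --non-critical-pod-matcher=true,
// every pod is treated as critical unless it explicitly opts out via a
// "non-critical" annotation. The annotation key is invented for illustration.
package main

import "fmt"

const nonCriticalAnnotation = "descheduler.example.com/non-critical" // hypothetical key

// isCritical inverts today's opt-in model: pods are critical unless they opt out.
func isCritical(annotations map[string]string, nonCriticalMatcher bool) bool {
	if !nonCriticalMatcher {
		return false // default behavior unchanged: criticality stays opt-in elsewhere
	}
	_, optedOut := annotations[nonCriticalAnnotation]
	return !optedOut
}

func main() {
	// Only the 20 non-critical pods need the annotation; the other 80 stay untouched.
	fmt.Println(isCritical(map[string]string{nonCriticalAnnotation: "true"}, true)) // false
	fmt.Println(isCritical(map[string]string{}, true))                              // true
}
```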

Proposal 2 - Consider existing annotations as critical
If I already have an annotation on my running applications that identifies a set of critical pods, it would be nice to be able to say "pods with this custom annotation and value are considered critical". With this, no changes would have to be applied at all before running the descheduler. Personally, I have an annotation called "layer" with the values (backend|monitoring|data|frontend), and I consider my data and monitoring pods critical. If I already have this annotation, why add another?

It could be done with --extra-critical-annotations="layer=data,layer=monitoring,k8s-app=prometheus". And if --non-critical-pod-matcher is set to true, --extra-non-critical-annotations="...." would apply instead.
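
A sketch of how the proposed flag value could be interpreted (the flag name is only a proposal; the parsing and matching below are just to illustrate the idea):

```go
// Hypothetical sketch of Proposal 2: parse a value like
// --extra-critical-annotations="layer=data,layer=monitoring,k8s-app=prometheus"
// and treat any pod carrying one of those key=value annotations as critical.
package main

import (
	"fmt"
	"strings"
)

// parseMatchers turns "k1=v1,k2=v2" into a slice of [key, value] pairs.
func parseMatchers(flagValue string) [][2]string {
	var matchers [][2]string
	for _, pair := range strings.Split(flagValue, ",") {
		kv := strings.SplitN(strings.TrimSpace(pair), "=", 2)
		if len(kv) == 2 {
			matchers = append(matchers, [2]string{kv[0], kv[1]})
		}
	}
	return matchers
}

// hasCriticalAnnotation reports whether any configured key=value pair
// appears in the pod's annotations.
func hasCriticalAnnotation(annotations map[string]string, matchers [][2]string) bool {
	for _, m := range matchers {
		if annotations[m[0]] == m[1] {
			return true
		}
	}
	return false
}

func main() {
	matchers := parseMatchers("layer=data,layer=monitoring,k8s-app=prometheus")
	monitoringPod := map[string]string{"layer": "monitoring"}
	fmt.Println(hasCriticalAnnotation(monitoringPod, matchers)) // true: not evicted
}
```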

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Apr 27, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on May 27, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
