
Ensure valid priority for control plane components #22217

Closed

Conversation

derekwaynecarr
Member

The test ensures that pods that run on the control plane have an appropriate priority.

Pods that run on each node should have "system-node-critical".
Pods that run on just the control plane should have "system-cluster-critical".

This is important when we co-locate workers with masters and need to ensure these pods are always scheduled.
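(For context, a minimal sketch of the kind of check this PR adds, written against current client-go; the function name and the way namespaces are supplied are illustrative assumptions, not the PR's exact code. The 2019-era client-go List call also took no context argument.)

package operators

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// checkControlPlanePriorities lists the pods in the given control plane
// namespaces and returns those that carry neither of the two built-in
// critical priority classes.
func checkControlPlanePriorities(client kubernetes.Interface, namespaces []string) ([]corev1.Pod, error) {
	var invalid []corev1.Pod
	for _, ns := range namespaces {
		pods, err := client.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return nil, fmt.Errorf("listing pods in %s: %w", ns, err)
		}
		for _, pod := range pods.Items {
			if pod.Spec.PriorityClassName != "system-cluster-critical" &&
				pod.Spec.PriorityClassName != "system-node-critical" {
				invalid = append(invalid, pod)
			}
		}
	}
	return invalid, nil
}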

@openshift-ci-robot added the size/M label (denotes a PR that changes 30-99 lines, ignoring generated files) Mar 2, 2019
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull request has been approved by: derekwaynecarr

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) Mar 2, 2019
@derekwaynecarr
Member Author

A number of BZs (Bugzilla fixes) will be needed for this to pass.

/assign @smarterclayton @ravisantoshgudimetla

	// nothing to check for this pod, move on
	continue
}

// flag any pod that carries neither of the two built-in critical priority classes
if pod.Spec.PriorityClassName != "system-cluster-critical" && pod.Spec.PriorityClassName != "system-node-critical" {
@ravisantoshgudimetla
Contributor
Mar 2, 2019

Sometimes we are noticing that even though the priorityClass is cluster-critical, the priority value is not being assigned. This is a similar kind of bug to the one you noticed for static pods (kubernetes/kubernetes#74222). So perhaps checking the priority value as well would be helpful here.
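(A minimal sketch of that suggestion, assuming the built-in values: system-cluster-critical resolves to priority 2000000000 and system-node-critical to 2000001000. The helper name is illustrative; corev1 is k8s.io/api/core/v1.)

// hasCriticalPriorityValue reports whether admission actually resolved a
// critical priority value for the pod. pod.Spec.Priority is a *int32 that
// stays nil when resolution never happened, which is the bug described above.
func hasCriticalPriorityValue(pod corev1.Pod) bool {
	const systemClusterCritical int32 = 2000000000 // assumed built-in lower bound for the critical classes
	return pod.Spec.Priority != nil && *pod.Spec.Priority >= systemClusterCritical
}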

var _ = Describe("[Feature:Platform][Smoke] Managed cluster should", func() {
f := e2e.NewDefaultFramework("operators")

It("should ensure control plane pods specify a priority", func() {
Contributor

The Describe() and the It() both have a "should". You're going to end up with "Managed cluster should should ensure..."
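(One way to address this, for example, is to drop the duplicate word from the It() description so the reported spec name reads "Managed cluster should ensure control plane pods specify a priority":)

It("ensure control plane pods specify a priority", func() {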

@jwforres
Member

jwforres commented Mar 5, 2019

aravindhp added a commit to aravindhp/operator-marketplace that referenced this pull request Mar 5, 2019
- Specify system-cluster-critical priority for operator
- Blocks passing smoke tests for ensuring control plane pods always schedule

See: openshift/origin#22217
Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1685331
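(The fix in commits like this one is typically a one-line addition to the operator Deployment's pod template; a sketch, not the exact operator-marketplace change:)

// deployment is an *appsv1.Deployment (k8s.io/api/apps/v1)
deployment.Spec.Template.Spec.PriorityClassName = "system-cluster-critical"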
@jianzhangbjz
Contributor

/cc

@derekwaynecarr
Member Author

/retest

2 similar comments
@derekwaynecarr
Member Author

/retest

@derekwaynecarr
Member Author

/retest

@derekwaynecarr
Member Author

/test e2e-aws

1 similar comment
@derekwaynecarr
Member Author

/test e2e-aws

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now, please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label (denotes an issue or PR that has remained open with no activity and has become stale) Jul 7, 2019
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now, please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label (denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label Aug 6, 2019
@jianzhangbjz
Contributor

/remove-lifecycle rotten

@openshift-ci-robot removed the lifecycle/rotten label Aug 15, 2019
@jianzhangbjz
Contributor

@scolange ^^

@jianzhangbjz
Contributor

/retest

@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now, please do so with /close.

/lifecycle stale

@openshift-ci-robot added the lifecycle/stale label Jan 10, 2020
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now, please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Feb 9, 2020
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci-robot

@openshift-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
