
Create PriorityClassName test #82546

Open · wants to merge 4 commits into base: master
Conversation

@BobyMCbobs (Contributor) commented Sep 10, 2019

What type of PR is this?
/kind cleanup

What this PR does / why we need it:
Creates PriorityClassName test

Which issue(s) this PR fixes:
Fixes #82545

Special notes for your reviewer:
Umbrella issue: #82312

Does this PR introduce a user-facing change?:

NONE

/area testing

@k8s-ci-robot (Contributor) commented Sep 10, 2019

@BobyMCbobs: The label(s) area/testing cannot be applied. These labels are supported: api-review, community/discussion, community/maintenance, community/question, cuj/build-train-deploy, cuj/multi-user, platform/aws, platform/azure, platform/gcp, platform/minikube, platform/other

In response to this: [the PR description quoted above]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot (Contributor) commented Sep 10, 2019

Hi @BobyMCbobs. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


@k8s-ci-robot (Contributor) commented Sep 10, 2019

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: BobyMCbobs
To complete the pull request process, please assign k82cn
You can assign the PR to them by writing /assign @k82cn in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@alejandrox1 (Contributor) left a comment

/ok-to-test
/priority backlog
/sig scheduling

Review thread on test/e2e/scheduling/preemption.go (outdated, resolved)
@BobyMCbobs (Contributor, Author) commented Sep 10, 2019

/test pull-kubernetes-e2e-gce-device-plugin-gpu

Review thread on test/e2e/scheduling/preemption.go (outdated, resolved)
@Huang-Wei (Member) left a comment

The statement "ensure Pods using a PriorityClassName which is higher begin first" isn't true. Please clarify what you want to test, then I can help come up with a meaningful test case.

@@ -467,6 +468,71 @@ var _ = SIGDescribe("PreemptionExecutionPath", func() {
}
}
})
// this test makes sure that Pods with a higher PriorityClass via their PriorityClassName start earlier than Pods with a lower PriorityClassName

@Huang-Wei (Member) commented Sep 11, 2019

This statement doesn't look right. PriorityClass only takes effect when the incoming pod can't fit on any node - that is, when the so-called "preemption" happens.
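For context, a minimal sketch of how a PriorityClass is defined and referenced via priorityClassName; the names, value, and surrounding variables below are illustrative and not from this PR:

    // Illustrative only: a PriorityClass object and a pod that references it
    // by name. The priority value only changes scheduling behavior when pods
    // actually compete for scarce resources (queue ordering, preemption).
    // Assumes imports: schedulingv1 "k8s.io/api/scheduling/v1",
    // v1 "k8s.io/api/core/v1", metav1, and the e2e imageutils package.
    highPriority := &schedulingv1.PriorityClass{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-high-priority"}, // hypothetical name
        Value:      1000,
    }
    pod := &v1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "priority-class-p0"},
        Spec: v1.PodSpec{
            PriorityClassName: highPriority.Name,
            Containers: []v1.Container{{
                Name:  "priority-class",
                Image: imageutils.GetE2EImage(imageutils.Agnhost),
            }},
        },
    }
    _ = pod // in the real test this would be created through the e2e framework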

Spec: v1.PodSpec{
Containers: []v1.Container{{
Name: "priority-class",
Image: imageutils.GetE2EImage(imageutils.Agnhost),

@Huang-Wei (Member) commented Sep 11, 2019

Why use this image?

@BobyMCbobs (Contributor, Author) commented Sep 11, 2019

I chose this image because it's platform-agnostic.

ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("priority-class-p%v", i),
},
Spec: v1.PodSpec{

@Huang-Wei (Member) commented Sep 11, 2019

The PodSpec doesn't claim any resource usage, so these pods won't test preemption at all.

@BobyMCbobs (Contributor, Author) commented Sep 11, 2019

OK, is there a resource you would recommend using, then?
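One possible direction (a sketch only; the quantities and the class name are assumptions, not a reviewer recommendation) is to give the container explicit CPU and memory requests so the pods actually compete for node capacity:

    // Sketch: a pod spec whose container declares resource requests so pods
    // contend for capacity. Quantities are illustrative.
    // Assumes v1 "k8s.io/api/core/v1" and
    // resource "k8s.io/apimachinery/pkg/api/resource" are imported.
    spec := v1.PodSpec{
        PriorityClassName: "e2e-high-priority", // hypothetical class name
        Containers: []v1.Container{{
            Name:  "priority-class",
            Image: imageutils.GetE2EImage(imageutils.Agnhost),
            Resources: v1.ResourceRequirements{
                Requests: v1.ResourceList{
                    v1.ResourceCPU:    resource.MustParse("500m"),
                    v1.ResourceMemory: resource.MustParse("256Mi"),
                },
            },
        }},
    }
    _ = spec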

@Huang-Wei (Member) commented Sep 11, 2019

@BobyMCbobs @alejandrox1 May I ask which aspect of spec.priorityClassName should be covered by a conformance test?

@alejandrox1 (Contributor) commented Sep 11, 2019

Thank you for your comments, @Huang-Wei!
From the looks of #82312 it seems that we just want to test the "general" behavior of pod priority and preemption.
Specifically, the scheduling order:

scheduler orders pending Pods by their priority and a pending Pod is placed ahead of other pending Pods with lower priority in the scheduling queue. As a result, the higher priority Pod may be scheduled sooner than Pods with lower priority if its scheduling requirements are met. If such Pod cannot be scheduled, scheduler will continue and tries to schedule other lower priority Pods.

So to make this test valid, the pods being created here should first be stuck in the Pending state (not enough room for them to run in the cluster).
Do you know of a good way of doing this, @Huang-Wei, or of a better alternative?
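One way to force the Pending state (a sketch only; cs, ns, nodeList, and the framework helpers are assumed from the e2e test setup, podsOnNode is a hypothetical helper, and the real e2e suite does more careful accounting) is to first fill each node's spare allocatable CPU with pause pods:

    // Sketch: occupy each node's spare allocatable CPU with a pause pod so
    // that pods created afterwards stay Pending. The pre-context client-go
    // Create signature (no context.Context argument) is assumed here.
    for _, node := range nodeList.Items {
        spare := node.Status.Allocatable.Cpu().MilliValue()
        for _, p := range podsOnNode(node.Name) { // hypothetical helper
            for _, c := range p.Spec.Containers {
                spare -= c.Resources.Requests.Cpu().MilliValue()
            }
        }
        filler := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "filler-" + node.Name},
            Spec: v1.PodSpec{
                NodeName: node.Name, // pin to the node we are filling
                Containers: []v1.Container{{
                    Name:  "filler",
                    Image: imageutils.GetE2EImage(imageutils.Pause),
                    Resources: v1.ResourceRequirements{
                        Requests: v1.ResourceList{
                            v1.ResourceCPU: *resource.NewMilliQuantity(spare, resource.DecimalSI),
                        },
                    },
                }},
            },
        }
        _, err := cs.CoreV1().Pods(ns).Create(filler)
        framework.ExpectNoError(err)
    }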

@Huang-Wei (Member) commented Sep 11, 2019

@alejandrox1 Ah, I see why this case is composed this way. Actually, only in a high-churn, highly utilized cluster - in other words, one where a lot of pods are pending in the internal queue - do pods with high priority get picked up by the scheduler earlier than low-priority ones, even if the high-priority pod is created after the low-priority one. So it's possible to simulate this in a unit test or integration test, but it will be a little challenging to simulate in an e2e test.

Since this is an internal behavior/implementation, I suppose it doesn't reach the bar of being a conformance test. Any thoughts?
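For reference, the queue behavior being described boils down to comparing spec.priority; a stripped-down illustration of that comparison (not the scheduler's actual code):

    // Stripped-down illustration of priority-based ordering in a pending queue:
    // higher spec.priority sorts first; a nil priority is treated as 0.
    // This mirrors the idea only, not the scheduler's real implementation.
    func higherPriority(p1, p2 *v1.Pod) bool {
        var prio1, prio2 int32
        if p1.Spec.Priority != nil {
            prio1 = *p1.Spec.Priority
        }
        if p2.Spec.Priority != nil {
            prio2 = *p2.Spec.Priority
        }
        return prio1 > prio2
    }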

},
},
&v1.Pod{},
0,

@alejandrox1 (Contributor) commented Sep 11, 2019

I've been thinking about this one... is 0 an appropriate value?

@Huang-Wei (Member) commented Sep 11, 2019

I'm a fan of setting 0, as the syncPeriod is meant to be the resync interval on the client (informer cache) side rather than the API server side.
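For context, that syncPeriod is the resync interval handed to the informer machinery; a generic client-go sketch (not code from this PR; clientset is assumed to exist) of what a value of 0 means:

    // Generic client-go sketch: a resync period of 0 disables periodic resyncs
    // of the informer's local cache; watch events are still delivered.
    // Assumes informers "k8s.io/client-go/informers" is imported.
    factory := informers.NewSharedInformerFactory(clientset, 0)
    podInformer := factory.Core().V1().Pods().Informer()
    _ = podInformer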
