Create PriorityClassName test #82546

Closed
Changes from 2 commits
65 changes: 65 additions & 0 deletions test/e2e/scheduling/preemption.go
@@ -38,6 +38,7 @@ import (
e2enode "k8s.io/kubernetes/test/e2e/framework/node"
e2epod "k8s.io/kubernetes/test/e2e/framework/pod"
"k8s.io/kubernetes/test/e2e/framework/replicaset"
imageutils "k8s.io/kubernetes/test/utils/image"

"github.com/onsi/ginkgo"
"github.com/onsi/gomega"
@@ -467,6 +468,70 @@ var _ = SIGDescribe("PreemptionExecutionPath", func() {
}
}
})
// this test makes sure that Pods whose PriorityClassName refers to a higher PriorityClass start earlier than Pods whose PriorityClassName refers to a lower one
Member:
This statement doesn't look right. PriorityClass only takes effect when the incoming Pod cannot fit on any node, which is when the so-called "preemption" happens.
ginkgo.It("should ensure Pods using a PriorityClassName which is higher begin first", func() {
podNamesSeen := []string{}
stopCh := make(chan struct{})
pods := []v1.Pod{}
// define Pod specs - order in from highest to lowest priority
ginkgo.By("Defining Pod specs")
for i := 4; i >= 1; i-- {
pods = append(pods, v1.Pod{
ObjectMeta: metav1.ObjectMeta{
Name: fmt.Sprintf("priority-class-p%v", i),
},
Spec: v1.PodSpec{
Member:
The PodSpec doesn't claim any type of resource usage, so these Pods won't test preemption at all.

Member (author):
OK, is there a resource which would be recommended to use then?
Containers: []v1.Container{{
Name: "priority-class",
Image: imageutils.GetE2EImage(imageutils.Agnhost),
Member:
Why use this image?

Member (author):
I chose this image because it's platform agnostic.

Member:
These are the defaults now and the correct images to use.

Command: []string{"sleep", "3600"},
}},
PriorityClassName: fmt.Sprintf("p%v", i),
RestartPolicy: v1.RestartPolicyNever,
},
})
}
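The specs above reference PriorityClassName values p1 through p4, which must resolve to existing PriorityClass objects; this diff does not create them, so the enclosing suite presumably sets them up. A hedged sketch of what that setup might look like with client-go (names and priority values are assumptions, not from this PR):

```go
// Hypothetical setup, assuming f is the framework.Framework used above.
// Requires: schedulingv1 "k8s.io/api/scheduling/v1"
for i := 1; i <= 4; i++ {
	_, err := f.ClientSet.SchedulingV1().PriorityClasses().Create(&schedulingv1.PriorityClass{
		ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("p%v", i)},
		Value:      int32(i * 100), // a higher value means a higher priority
	})
	framework.ExpectNoError(err, "Failed creating PriorityClass")
}
```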
// watch for when new pods come up
_, podController := cache.NewInformer(
&cache.ListWatch{
ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
obj, err := f.ClientSet.CoreV1().Pods(ns).List(options)
return runtime.Object(obj), err
},
WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
return f.ClientSet.CoreV1().Pods(ns).Watch(options)
},
},
&v1.Pod{},
0,
Contributor:
Been thinking about this one... is 0 an appropriate value?

Member:
I'm a fan of setting 0 as the syncPeriod; it's meant to be the resync interval on the client (informer cache) side rather than on the API server side.

cache.ResourceEventHandlerFuncs{
AddFunc: func(obj interface{}) {
if pod, ok := obj.(*v1.Pod); ok {
fmt.Printf("Seen Pod %v/%v: %v\n", len(podNamesSeen)+1, len(pods), pod.Name)
podNamesSeen = append(podNamesSeen, pod.Name)
}
},
},
)

go podController.Run(stopCh)
defer close(stopCh)

ginkgo.By("Creating Pods with priority")
for podNum := 0; podNum < len(pods); podNum++ {
podName := pods[podNum].ObjectMeta.Name
fmt.Printf("Creating Pod %v/%v: %v\n", podNum+1, len(pods), podName)
_, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Create(&pods[podNum])
framework.ExpectNoError(err, "Failed creating Pod")
}
fmt.Printf("Pods seen so far: %v\n", podNamesSeen)
framework.ExpectEqual(len(podNamesSeen), len(pods), "PodsSeen doesn't match Pods created")

for podNum := 0; podNum < len(pods); podNum++ {
podName := pods[podNum].ObjectMeta.Name
framework.ExpectEqual(podNamesSeen[podNum], podName, "Pods preempted in incorrect order")
}
})
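Stripped of the e2e framework, the final loop above reduces to checking that two name slices match element by element. A self-contained sketch of that check (the helper name is mine, not from the PR):

```go
package main

import "fmt"

// sameOrder reports whether got and want contain the same names
// in the same positions, mirroring the assertion loop in the test above.
func sameOrder(got, want []string) bool {
	if len(got) != len(want) {
		return false
	}
	for i := range want {
		if got[i] != want[i] {
			return false
		}
	}
	return true
}

func main() {
	created := []string{"priority-class-p4", "priority-class-p3", "priority-class-p2", "priority-class-p1"}
	seen := []string{"priority-class-p4", "priority-class-p3", "priority-class-p2", "priority-class-p1"}
	fmt.Println(sameOrder(seen, created)) // true when Pods were observed in creation order
}
```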
})

type pauseRSConfig struct {