Add tests for ignoring scheduler processing #121783
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Hi @atwamahmoud. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Have you confirmed that these tests are running correctly?
cleanupFunc := ReserveMemoryWithSchedulerName(ctx, f, "memory-reservation", replicaCount, reservedMemory, false, 1, schedulerName)
defer cleanupFunc()
// Verify that cluster size is the same
ginkgo.By(fmt.Sprintf("Waiting for scale up hoping it won't happen, sleep for %s", scaleUpTimeout.String()))
IIUC we are not actually sleeping here, while we should.
Yes, my bad, I forgot to keep it consistent with the previous test. Should be alright now.
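For reference, the sleep-then-verify pattern being referred to looks roughly like this (a sketch only; the helper names and signatures are taken from the surrounding diff and assumed to match the existing autoscaling e2e helpers):

cleanupFunc := ReserveMemoryWithSchedulerName(ctx, f, "memory-reservation", replicaCount, reservedMemory, false, 1, schedulerName)
defer cleanupFunc()
// Block for the full scale-up window, then assert the node count is unchanged.
ginkgo.By(fmt.Sprintf("Waiting for scale up hoping it won't happen, sleep for %s", scaleUpTimeout.String()))
time.Sleep(scaleUpTimeout)
framework.ExpectNoError(WaitForClusterSizeFunc(ctx, f.ClientSet,
	func(size int) bool { return size == nodeCount }, time.Second))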
Yes, they behave as expected
/lgtm
LGTM label has been added. Git tree hash: 9e6c3d0c5911585537969632f81a2f0bab0d06fd
/lgtm
LGTM label has been added. Git tree hash: 4554a5b58d04576342b6f0dfcbd31ee845c5e431
test/e2e/feature/feature.go
Outdated
@@ -129,6 +129,7 @@ var (
Windows = framework.WithFeature(framework.ValidFeatures.Add("Windows"))
WindowsHostProcessContainers = framework.WithFeature(framework.ValidFeatures.Add("WindowsHostProcessContainers"))
WindowsHyperVContainers = framework.WithFeature(framework.ValidFeatures.Add("WindowsHyperVContainers"))
ClusterScaleUpBypassScheduler = framework.WithFeature(framework.ValidFeatures.Add("ClusterScaleUpBypassScheduler"))
Incorrect tabulation?
Yes, my bad. Should be OK now.
Force-pushed from 685cbc0 to f9388c8
/ok-to-test
Force-pushed from f9388c8 to 6073d1c
ginkgo.DeferCleanup(ReserveMemoryWithSchedulerName(ctx, f, "memory-reservation", replicaCount, reservedMemory, false, 1, nonExistingBypassedSchedulerName))
// Verify that cluster size is the same
ginkgo.By(fmt.Sprintf("Waiting for scale up hoping it won't happen, sleep for %s", scaleUpTimeout.String()))
time.Sleep(scaleUpTimeout)
Why a sleep?
Can't we do an active loop with a wait.Poll, for example?
Mainly to be consistent with the other test(s) that don't expect a scale up/down; in this test (and the next one) we're expecting no scale-up/down.
For example, this test https://github.com/kubernetes/kubernetes/pull/121783/files/6073d1cd3d58384d12a750bda749ff1922812be3#diff-4f7cc8ec3b56aa879a019633705140fea00aafda1495d321704f92bd31ec6468R1001 doesn't expect a scale down, so it sleeps for a while and then checks the size.
We can, however, update it to use a Poll up to a timeout and update the other tests that sleep as well.
Updated to use gomega.Consistently instead of sleeping.
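Roughly, the gomega.Consistently replacement looks like this (a sketch, not the exact code from the PR; the Nodes list call, error message, and framework.Poll interval are illustrative and assumed):

sizeFunc := func(size int) bool { return size == nodeCount }
// Poll for the entire scale-up window and fail if the size ever deviates,
// rather than sleeping once and checking at the end.
gomega.Consistently(ctx, func() error {
	nodes, err := f.ClientSet.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	if !sizeFunc(len(nodes.Items)) {
		return fmt.Errorf("unexpected cluster size %d, want %d", len(nodes.Items), nodeCount)
	}
	return nil
}).WithTimeout(scaleUpTimeout).WithPolling(framework.Poll).Should(gomega.Succeed())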
Force-pushed from 4209f7c to 73565cd
sizeFunc := func(size int) bool {
	return size == nodeCount
}
gomega.Consistently(ctx, func() error {
Consistently or Eventually? https://onsi.github.io/gomega/#eventually
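For context (not from the thread): gomega.Eventually passes as soon as the assertion succeeds once within the timeout, while gomega.Consistently requires it to keep succeeding for the whole window, which is what a "no scale-up should happen" check needs. A minimal contrast, with getSize as a hypothetical helper returning the current node count:

// Eventually: done as soon as the size matches once.
gomega.Eventually(ctx, getSize).WithTimeout(scaleUpTimeout).Should(gomega.Equal(nodeCount))
// Consistently: the size must match for the entire window.
gomega.Consistently(ctx, getSize).WithTimeout(scaleUpTimeout).Should(gomega.Equal(nodeCount))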
/hold cancel
Technically looks correct; someone should review the test logic as I'm not familiar with this feature.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: aojea, atwamahmoud
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest
/lgtm
LGTM label has been added. Git tree hash: 20c92c675ac2502c6ac11d3f90c6cb310e808ead
}
framework.ExpectNoError(WaitForClusterSizeFuncWithUnready(ctx, f.ClientSet, sizeFunc, scaleUpTimeout, 0))
})
f.It("shouldn't scale up when unprocessed pod is created and is going to be schedulable", feature.ClusterScaleUpBypassScheduler, func(ctx context.Context) {
This test should have been marked as "slow" because it blocks for 5 minutes (= scaleUpTimeout).
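A sketch of how the slow label could be carried alongside the feature label (assuming framework.WithSlow() is the intended decorator here; the test body is unchanged):

f.It("shouldn't scale up when unprocessed pod is created and is going to be schedulable",
	feature.ClusterScaleUpBypassScheduler, framework.WithSlow(), func(ctx context.Context) {
		// ... test body ...
	})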
@@ -132,6 +132,7 @@ var (
Windows = framework.WithFeature(framework.ValidFeatures.Add("Windows"))
WindowsHostProcessContainers = framework.WithFeature(framework.ValidFeatures.Add("WindowsHostProcessContainers"))
WindowsHyperVContainers = framework.WithFeature(framework.ValidFeatures.Add("WindowsHyperVContainers"))
ClusterScaleUpBypassScheduler = framework.WithFeature(framework.ValidFeatures.Add("ClusterScaleUpBypassScheduler"))
In the future please keep this in alphabetical order (#123260 will make that more obvious).
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Adds 3 E2E tests to verify the behaviour of bypassing scheduler processing.
The tests are labeled with the ClusterScaleUpBypassScheduler feature so they can be easily ignored/focused, since the tests will fail without enabling the feature described in Bypassing scheduler processing.
The tests use a scheduler name of non-existing-bypassed-scheduler.
Does this PR introduce a user-facing change?
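For illustration (not code from the PR): the reserved pods carry a non-default schedulerName, so the default scheduler never processes them, and the autoscaler only considers them when it is configured to bypass that scheduler name. The pod name, image, and memory request below are made up:

pod := &v1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "memory-reservation-0"},
	Spec: v1.PodSpec{
		// Non-default scheduler name used by the new tests.
		SchedulerName: "non-existing-bypassed-scheduler",
		Containers: []v1.Container{{
			Name:  "pause",
			Image: "registry.k8s.io/pause:3.9",
			Resources: v1.ResourceRequirements{
				Requests: v1.ResourceList{
					v1.ResourceMemory: resource.MustParse("500Mi"),
				},
			},
		}},
	},
}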