cleanup useless null pointer check about nodeInfo.Node() from snapshot for in-tree plugins #117834
Conversation
/retest

/assign @Huang-Wei PTAL, thanks.
(force-pushed from 0457124 to 49169da)
(force-pushed from 8152163 to 2b1a2d7)
test idea lgtm
(force-pushed from d696ca4 to b0ba0b0)
@Huang-Wei @alculquicondor Thanks for the multiple suggestions. The test code has been adjusted, PTAL, thanks a lot.
(force-pushed from 6408253 to 6f5dd62)
Leaving the review of the test to @Huang-Wei.
pkg/scheduler/schedule_one_test.go (outdated):
```go
			createPodIndex++
		}
	}
	go wait.Until(createPodsOneRound, 14*time.Millisecond, ctx.Done())
```
Is it intentional to have 14 ms here, and 15 ms in the deleteNodes goroutine?
Yes. I originally thought that staggering the intervals would increase the probability of collisions between the deleteNodes goroutine (which deletes nodes) and the createPods goroutine, forcing the scheduler to take a fresh snapshot, which is helpful for reproducing the nil-node panic. But it doesn't seem to have much impact now, so I unified them.
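For reference, a minimal sketch of the unified setup (deleteNodesOneRound is an assumed name for the companion helper, and 10ms matches the final waitPeriod discussed below):

```go
// Both goroutines tick at the same interval; the race between pod
// creation and node deletion is what forces the scheduler to
// re-snapshot, and is what previously surfaced the nil-node panic.
go wait.Until(createPodsOneRound, 10*time.Millisecond, ctx.Done())
go wait.Until(deleteNodesOneRound, 10*time.Millisecond, ctx.Done())
```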
pkg/scheduler/schedule_one_test.go (outdated):
```go
// capture the events to wait all pods to be scheduled at least once
allWaitSchedulingPods := sets.NewString()
for i := 0; i < waitSchedulingPodNumber; i++ {
	allWaitSchedulingPods.Insert(fmt.Sprintf("pod%d", i))
}
var wg sync.WaitGroup
wg.Add(waitSchedulingPodNumber)
stopFn, err := broadcaster.StartEventWatcher(func(obj runtime.Object) {
	e, ok := obj.(*eventsv1.Event)
	if !ok || (e.Reason != "Scheduled" && e.Reason != "FailedScheduling") {
		return
	}
	if allWaitSchedulingPods.Has(e.Regarding.Name) {
		wg.Done()
		allWaitSchedulingPods.Delete(e.Regarding.Name)
	}
})
if err != nil {
	t.Fatal(err)
}
defer stopFn()

wg.Wait()
```
This is logically accurate. But it seems it'd make this UT a bit slow:

```
$ go test ./pkg/scheduler -run TestSchedulerGuaranteeNonNilNodeInSchedulingCycle
ok      k8s.io/kubernetes/pkg/scheduler 1.771s
```

I don't think a single UT costing ~2s is a good idea... Do you think we can achieve the same goal by simply plumbing a counter into the fake plugin (make it atomic and increment it on each call) and checking completion against the counter?
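A minimal sketch of the suggested counter approach, assuming a hypothetical fake Filter plugin (none of these names come from the actual PR):

```go
import (
	"context"
	"sync/atomic"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// countingPlugin is a hypothetical fake plugin that counts how many
// times its Filter extension point has been invoked.
type countingPlugin struct {
	calls atomic.Int64
}

func (p *countingPlugin) Name() string { return "countingPlugin" }

func (p *countingPlugin) Filter(ctx context.Context, _ *framework.CycleState, _ *v1.Pod, _ *framework.NodeInfo) *framework.Status {
	p.calls.Add(1) // atomic +1 per call, safe across scheduling goroutines
	return nil     // nil status: the node passes the filter
}
```

The test could then poll p.calls.Load() until it reaches the expected number of scheduling attempts, instead of standing up an event watcher.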
Yes, the execution time of this UT should be optimized. I found that most of the time was spent waiting for createPodsOneRound to be called, as the following calculation shows:

(waitSchedulingPodNumber / createPodNumberPerRound) * waitPeriod = (2000 / 25) * 14ms = 1.12s

Then I updated the parameters as follows:
- waitSchedulingPodNumber from 2000 to 200
- createPodNumberPerRound from 25 to 50
- waitPeriod from 14ms to 10ms

(waitSchedulingPodNumber / createPodNumberPerRound) * waitPeriod = (200 / 50) * 10ms = 0.04s

With that, the overall UT time on my Mac is 0.19s:

```
=== RUN   TestSchedulerGuaranteeNonNilNodeInSchedulingCycle
--- PASS: TestSchedulerGuaranteeNonNilNodeInSchedulingCycle (0.19s)
PASS
```

In that case, I'm thinking of keeping the fake plugin as it is now.
(force-pushed from 6f5dd62 to 17661a6)
(force-pushed from 17661a6 to ed26fcf)
/triage accepted
/lgtm
/approve
Thanks @NoicFank!
LGTM label has been added. Git tree hash: 89b8c09b42a6776e0ae4bfef837cebc0ccff17f5
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Huang-Wei, NoicFank

The full list of commands accepted by this bot can be found here. The pull request process is described here.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
Since nodeInfo.Node() is guaranteed to be non-nil for all the nodes in the snapshot, it's safe to clean up the nil-pointer checks on nodeInfo.Node() in the in-tree plugins.
Besides, some in-tree plugins currently use nodeInfo.Node() directly without a nil check while others check it, so this PR unifies the plugins so that none of them checks nodeInfo.Node() for nil.
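For illustration only (examplePlugin is hypothetical, not the PR's actual diff), the kind of check being removed looks like this:

```go
import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/kubernetes/pkg/scheduler/framework"
)

// examplePlugin is a hypothetical stand-in for any in-tree plugin.
type examplePlugin struct{}

func (pl *examplePlugin) Filter(ctx context.Context, _ *framework.CycleState, _ *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status {
	node := nodeInfo.Node()
	// The defensive check this PR removes: nodeInfo.Node() is guaranteed
	// non-nil for every node obtained from the scheduler snapshot, so
	// plugins no longer need anything like:
	//
	//   if node == nil {
	//       return framework.AsStatus(fmt.Errorf("node not found"))
	//   }
	_ = node // ... plugin-specific filtering logic using node ...
	return nil
}
```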
Which issue(s) this PR fixes:
Fixes # NONE
Special notes for your reviewer:
NONE
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: