Test pod becoming schedulable when another pod is added or updated #92074
Conversation
Hi @nodo. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cc @alculquicondor
I'd prefer to add other tests for pod-antiaffinity like this:
- init: create 2 nodes and 2 pods with label "foo", with pod-antiaffinity to not co-exist with pod with label "foo" - so they don't get scheduled to the same node
- pod: create a regular pod with label "foo"
- update: delete one of the 2 pods we created in init()
And we can come up with another test:
- init: create 2 nodes and 2 pods with label "foo", with nodeAffinity (or nodeSelector) to land on node1/node2 accordingly
- pod: create a pod with label "foo", and with pod-antiaffinity to not co-exist with pod with label "foo"
- update: delete one of the 2 pods we created in init()
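To make the behavior these proposed cases exercise concrete, here is a minimal, self-contained sketch of pod anti-affinity scheduling. It is not the real scheduler code: the `Pod` struct, `nodeFits`, and `matches` are simplified stand-ins for `corev1.Pod`, the `InterPodAffinity` plugin, and label selectors.

```go
package main

import "fmt"

// Pod models only what the discussion needs: a label set and an
// anti-affinity selector (labels the pod refuses to share a node with).
// Both fields are simplified stand-ins for the real API types.
type Pod struct {
	Name         string
	Labels       map[string]string
	AntiAffinity map[string]string
}

// matches reports whether any key/value in selector appears in labels.
func matches(labels, selector map[string]string) bool {
	for k, v := range selector {
		if labels[k] == v {
			return true
		}
	}
	return false
}

// nodeFits reports whether pod can land on a node given the pods already
// there: anti-affinity is symmetric, so neither side may match the other.
func nodeFits(pod Pod, existing []Pod) bool {
	for _, other := range existing {
		if matches(other.Labels, pod.AntiAffinity) || matches(pod.Labels, other.AntiAffinity) {
			return false
		}
	}
	return true
}

func main() {
	foo := map[string]string{"app": "foo"}
	nodes := map[string][]Pod{"node1": nil, "node2": nil}

	// init: two pods with label "foo" and anti-affinity to "foo",
	// one per node (they cannot co-exist on the same node).
	nodes["node1"] = append(nodes["node1"], Pod{Name: "init-1", Labels: foo, AntiAffinity: foo})
	nodes["node2"] = append(nodes["node2"], Pod{Name: "init-2", Labels: foo, AntiAffinity: foo})

	// pod: a regular pod with label "foo" is unschedulable on both nodes.
	p := Pod{Name: "test-pod", Labels: foo}
	fmt.Println(nodeFits(p, nodes["node1"]), nodeFits(p, nodes["node2"])) // false false

	// update: deleting one init pod frees its node for the pending pod.
	nodes["node1"] = nil
	fmt.Println(nodeFits(p, nodes["node1"])) // true
}
```

The deletion in the last step is the event the proposed integration test would assert on: the pending pod becomes schedulable only after one of the init pods is removed.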
@@ -1106,7 +1106,49 @@ func TestUnschedulablePodBecomesSchedulable(t *testing.T) {
			return nil
		},
	},
	// TODO(#91111): Add more test cases.
	{
		name: "other pod gets added",
Suggested change:
-		name: "other pod gets added",
+		name: "pod with pod-affinity gets added",
Ok, let's back up a bit.
The idea of these tests is to cover the different events that change the state of the cluster. Ideally, we would cover all variations of events that we can think of. But since these are integration tests, and they are expensive, we should be a bit more conservative.
Then, we can cover the cases where we know a different path is taken. That is not the case for delete: all pods are moved back to the active queue. So I'm happy to just keep the test we have that uses the Fit plugin. The rest of the affinity workflow is already covered with unit tests.
wdyt @Huang-Wei
@nodo @alculquicondor In terms of using integration tests to only cover different paths, I'm fine with holding off the extra deletion tests.
However, I think we missed the Update path, any thoughts on adding a test like this:
- init: create a node and a pod (without any label). The pod is expected to be scheduled.
- pod: create a pod without any label, and with pod-affinity to co-exist with label "foo". The pod is expected to be pending - just like the current test.
- update: apply label "foo" to the scheduled pod. Then it's expected to trigger cache#UpdatePod(), and the pending pod is expected to be scheduled.
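The Update path described above can be sketched with a small self-contained model. This is not the scheduler's cache code: `Pod`, `fits`, and the label maps are simplified stand-ins, and the mutation of the running pod's labels stands in for the update event that would trigger `cache#UpdatePod()` and move the pending pod back to the active queue.

```go
package main

import "fmt"

// Pod models only what the Update-path discussion needs: a label set and
// a pod-affinity selector (labels a co-located pod must carry).
type Pod struct {
	Labels   map[string]string
	Affinity map[string]string
}

// fits reports whether the pending pod's affinity is satisfied by at least
// one pod already running on the node.
func fits(pending Pod, running []Pod) bool {
	if len(pending.Affinity) == 0 {
		return true
	}
	for _, r := range running {
		satisfied := true
		for k, v := range pending.Affinity {
			if r.Labels[k] != v {
				satisfied = false
				break
			}
		}
		if satisfied {
			return true
		}
	}
	return false
}

func main() {
	// init: a scheduled pod without any label.
	scheduled := Pod{Labels: map[string]string{}}
	node := []Pod{scheduled}

	// pod: a pending pod with pod-affinity to label "foo" - unschedulable.
	pending := Pod{Affinity: map[string]string{"app": "foo"}}
	fmt.Println(fits(pending, node)) // false

	// update: apply label "foo" to the scheduled pod. In the real scheduler
	// this update event re-queues the pending pod, which now fits.
	scheduled.Labels["app"] = "foo"
	fmt.Println(fits(pending, node)) // true
}
```

Note that labels are mutable on running pods (confirmed later in this thread), which is what makes this a valid scheduling event to test.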
Is that a valid case? Can you change labels in running Pods?
If so, I'm ok with having that test. Otherwise it's unnecessary.
I tried both plain pod and podTemplate, they're both mutable.
cool, it makes sense to me. I will add an additional test in this PR.
Don't forget to update the name of the test to be more descriptive :)
pod with pod-affinity gets added
aha! good point, thanks! :)
if err := utils.AddLabelsToNode(cs, node.Name, map[string]string{"region": "test"}); err != nil {
	return fmt.Errorf("cannot add labels to node: %v", err)
}
Cannot we just initialize the node with the labels?
I was replicating the same logic in this other test: https://github.com/kubernetes/kubernetes/blob/master/test/integration/scheduler/predicates_test.go#L47-L58.
I could not find a test utility to create nodes with labels - do you know if there is one? If not, I am not sure it's worth adding, but what do you think?
Sigh... we should really be cleaning existing mistakes, otherwise, other developers will copy them over and over.
For now, please leave a TODO here, as well as https://github.com/kubernetes/kubernetes/blob/master/test/integration/scheduler/predicates_test.go#L47-L58. A good and consistent way to craft nodes/pods is to use pkg/scheduler/testing/wrapper.go - but it's a big item.
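For illustration, here is a minimal sketch of the builder style that pkg/scheduler/testing/wrapper.go uses, so a node can be created with its labels in one step instead of a follow-up AddLabelsToNode call. The `Node`, `NodeWrapper`, and `MakeNode` names here are simplified stand-ins, not the actual wrapper API.

```go
package main

import "fmt"

// Node is a stand-in for the corev1.Node fields used in this sketch.
type Node struct {
	Name   string
	Labels map[string]string
}

// NodeWrapper is a hypothetical builder in the spirit of
// pkg/scheduler/testing/wrapper.go: each method mutates the wrapped node
// and returns the wrapper, so test setup reads as one chained expression.
type NodeWrapper struct{ node Node }

// MakeNode starts a new builder with an initialized label map.
func MakeNode() *NodeWrapper {
	return &NodeWrapper{Node{Labels: map[string]string{}}}
}

// Name sets the node name.
func (w *NodeWrapper) Name(n string) *NodeWrapper {
	w.node.Name = n
	return w
}

// Label adds a single label to the node.
func (w *NodeWrapper) Label(k, v string) *NodeWrapper {
	w.node.Labels[k] = v
	return w
}

// Obj returns the built node.
func (w *NodeWrapper) Obj() Node { return w.node }

func main() {
	// The node is created with its labels up front, so the test no longer
	// needs to create the node and then patch labels onto it.
	node := MakeNode().Name("node-1").Label("region", "test").Obj()
	fmt.Println(node.Name, node.Labels["region"]) // node-1 test
}
```

The design choice here is the usual builder trade-off: slightly more boilerplate in the test library in exchange for declarative, copy-paste-safe test setup.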
Will do, I am happy to work on the cleanup after this PR.
Let's just get all test cases described by @Huang-Wei in this PR
/ok-to-test
/retest
/approve will leave lgtm to @Huang-Wei
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: alculquicondor, nodo. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/lgtm Thanks @nodo.
/retest
/retest
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
It adds two integration tests.
The first one is for a pod creation event: the pod we are testing is scheduled once a new pod matching its affinity is added.
The second one is for a pod update event: the pod we are testing is scheduled once a previously scheduled pod is updated with a new label (which corresponds to the affinity of the pod we are testing).
Which issue(s) this PR fixes:
Fixes #91111
Does this PR introduce a user-facing change?: