
Disable matching on few selectors. Remove duplicates. #72801

Merged
merged 1 commit into kubernetes:master on Jan 15, 2019

Conversation

@Ramyak
Contributor

Ramyak commented Jan 11, 2019

What type of PR is this?

/kind bug

What this PR does / why we need it:
Problem: When there are two selectors (e.g. a service and a replication controller), it is sufficient for a pod to match any one selector to be counted during distribution. This creates imbalance [selector match code].

Pods from previous deploys match the service selector and are counted when distributing pods across zones/nodes, even though they do not match the replicaset selector. These pods will soon be deleted, so after the deploy completes the cluster is imbalanced by zone and/or by pods per node.

Fix: A pod must match all selectors to be counted. Partial matches are still allowed.
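A minimal sketch of the change, using plain label maps in place of the scheduler's real `labels.Selector` type (the pod labels and selector contents here are hypothetical, not taken from the PR):

```go
package main

import "fmt"

// matches reports whether pod labels satisfy one equality-based selector.
func matches(podLabels, selector map[string]string) bool {
	for k, v := range selector {
		if podLabels[k] != v {
			return false
		}
	}
	return true
}

// matchesAny is the previous behavior: any one selector is enough.
func matchesAny(podLabels map[string]string, selectors []map[string]string) bool {
	for _, sel := range selectors {
		if matches(podLabels, sel) {
			return true
		}
	}
	return false
}

// matchesAll is the fixed behavior: every selector must match.
func matchesAll(podLabels map[string]string, selectors []map[string]string) bool {
	if len(selectors) == 0 {
		return false
	}
	for _, sel := range selectors {
		if !matches(podLabels, sel) {
			return false
		}
	}
	return true
}

func main() {
	// A pod left over from the previous deploy: it matches the service
	// selector but not the new replicaset's pod-template-hash.
	oldPod := map[string]string{"app": "web", "pod-template-hash": "old"}
	selectors := []map[string]string{
		{"app": "web"},                             // service selector
		{"app": "web", "pod-template-hash": "new"}, // replicaset selector
	}
	fmt.Println(matchesAny(oldPod, selectors)) // true: old pod was counted
	fmt.Println(matchesAll(oldPod, selectors)) // false: old pod now ignored
}
```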

Which issue(s) this PR fixes:
Fixes #71327

Special notes for your reviewer:
#71328
Splitting this into 2 reviews.

Does this PR introduce a user-facing change?:

Fix SelectorSpreadPriority scheduler to match all selectors when distributing pods.

/sig scheduling

@k8s-ci-robot

Contributor

k8s-ci-robot commented Jan 11, 2019

Hi @Ramyak. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@wgliang

Member

wgliang commented Jan 11, 2019

/ok-to-test

@Ramyak Ramyak force-pushed the Ramyak:ramya/match-all-selectors branch 2 times, most recently from a50dc68 to a56a7de Jan 11, 2019

@Ramyak

Contributor Author

Ramyak commented Jan 11, 2019

/test pull-kubernetes-e2e-gce-100-performance
/test pull-kubernetes-integration
/test pull-kubernetes-kubemark-e2e-gce-big

@Ramyak

Contributor Author

Ramyak commented Jan 11, 2019

/assign @k82cn
/assign @bsalamat

@bsalamat
Member

bsalamat left a comment

Thanks, @Ramyak for the fix. Could you please address my comment?

@Ramyak Ramyak force-pushed the Ramyak:ramya/match-all-selectors branch from a56a7de to f2de2b6 Jan 11, 2019

@bsalamat
Member

bsalamat left a comment

This PR addresses "Problem 2" of #71327, but it is a diversion from the existing behavior. Our existing code gets a union of all pods that match any of the selectors. This PR changes the union to an intersection operator. While I don't have a particular scenario in mind that would break if someone uses a standard collection, it is easy to find scenarios where pods with custom labels will no longer spread properly after this change. For that reason, I would like to think a bit more about this PR before I can approve it.
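The union-vs-intersection distinction above can be made concrete with a toy count of how many existing pods would be considered when spreading (labels and values are illustrative only):

```go
package main

import "fmt"

// matches reports whether pod labels satisfy one equality-based selector.
func matches(pod, sel map[string]string) bool {
	for k, v := range sel {
		if pod[k] != v {
			return false
		}
	}
	return true
}

// countMatched counts pods under the two matching policies: union
// (any selector matches) versus intersection (all selectors match).
func countMatched(pods, selectors []map[string]string) (union, intersection int) {
	for _, p := range pods {
		any, all := false, true
		for _, s := range selectors {
			if matches(p, s) {
				any = true
			} else {
				all = false
			}
		}
		if any {
			union++
		}
		if all {
			intersection++
		}
	}
	return
}

func main() {
	pods := []map[string]string{
		{"app": "web", "hash": "v1"}, // leftover pod from the previous deploy
		{"app": "web", "hash": "v2"}, // pod from the current replicaset
	}
	selectors := []map[string]string{
		{"app": "web"},               // service selector
		{"app": "web", "hash": "v2"}, // replicaset selector
	}
	u, i := countMatched(pods, selectors)
	fmt.Println(u, i) // 2 1: the union over-counts the terminating pod
}
```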

@Ramyak Ramyak force-pushed the Ramyak:ramya/match-all-selectors branch from f2de2b6 to c21c57e Jan 12, 2019

@Ramyak Ramyak force-pushed the Ramyak:ramya/match-all-selectors branch from c21c57e to 339ce0e Jan 12, 2019

@bsalamat
Member

bsalamat left a comment

/approve

I thought more about this change and given that the pattern mentioned in the issue can happen for many users, I decided to approve it.

@k8s-ci-robot

Contributor

k8s-ci-robot commented Jan 15, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bsalamat, Ramyak

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@bsalamat
Member

bsalamat left a comment

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm label Jan 15, 2019

@Ramyak

Contributor Author

Ramyak commented Jan 15, 2019

/test pull-kubernetes-e2e-gce-100-performance
/test pull-kubernetes-integration

@k8s-ci-robot k8s-ci-robot merged commit 9661abe into kubernetes:master Jan 15, 2019

18 checks passed

cla/linuxfoundation: Ramyak authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-kops-aws: Job succeeded.
pull-kubernetes-e2e-kubeadm-gce: Skipped
pull-kubernetes-godeps: Skipped
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-local-e2e-containerized: Skipped
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
tide: In merge pool.