
Fix kubeadm taints to not override existing node taints #65068

Merged — 2 commits merged into kubernetes:master on Jun 15, 2018

Conversation

@ashleyschuett (Contributor) commented Jun 13, 2018

What this PR does / why we need it:
If a node has existing taints, they are replaced by the taints from the kubeadm config.

An example of this: the uninitialized taint that the kubelet sets for an external cloud provider is removed and replaced with the master taint if one is set, or simply removed (leaving the node's taints empty) if noTaintMaster=true.

Release note: None
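The fixed behaviour described above can be sketched as follows. This is only an illustration, not the actual kubeadm code: `Taint` is a simplified stand-in for `v1.Taint`, and `mergeTaints` is a hypothetical helper showing how existing node taints are preserved while config taints are appended.

```go
package main

import "fmt"

// Taint is a simplified stand-in for v1.Taint (illustrative only).
type Taint struct {
	Key    string
	Value  string
	Effect string
}

// mergeTaints preserves the node's existing taints and appends any taints
// from the kubeadm config that are not already present.
func mergeTaints(existing, fromConfig []Taint) []Taint {
	merged := append([]Taint{}, existing...)
	for _, t := range fromConfig {
		present := false
		for _, e := range merged {
			if e == t {
				present = true
				break
			}
		}
		if !present {
			merged = append(merged, t)
		}
	}
	return merged
}

func main() {
	// The cloud-provider "uninitialized" taint set by the kubelet must survive
	// the merge instead of being replaced by the config's master taint.
	existing := []Taint{{Key: "node.cloudprovider.kubernetes.io/uninitialized", Value: "true", Effect: "NoSchedule"}}
	config := []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}
	for _, t := range mergeTaints(existing, config) {
		fmt.Println(t.Key)
	}
}
```

With the pre-fix behaviour, only the config taint would remain; with the merge, both taints are kept.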
@ashleyschuett (Contributor, Author) commented Jun 13, 2018

@wgliang

@wgliang (Member) commented Jun 13, 2018

/ok-to-test

@ashleyschuett (Contributor, Author) commented Jun 14, 2018

The test that failed is

{
    "nothing missing but taint unwanted",
    kubeadmconstants.LabelNodeRoleMaster,
    []v1.Taint{kubeadmconstants.MasterTaint},
    nil,
    "{\"spec\":{\"taints\":null}}",
}

This test actually asserts that the previous behavior is correct, which contradicts the logic of this PR. Should I remove the test, or does the logic of this change need to be discussed more? I think the test should be removed, as I would not expect taints to be removed just because they aren't specified in the config file.

@fabriziopandini (Contributor) commented Jun 14, 2018

@ashleyschuett thanks for your PR!
The test fails because, with your PR, we are removing the possibility of deleting all the current taints using mark-master.
I'm not aware of a use case where this is required, but since this behaviour exists and there is a specific unit test for it, it's better to ask.
/CC @luxas @ijc
Thoughts here?

P.S. It would be great to have specific test cases for the new "merge" behaviour.

@ijc (Contributor) commented Jun 14, 2018

The test/behaviour came in #55479 where I said in the PR intro:

The code also arranges to remove an existing taint if it is unwanted. I'm unsure if this behaviour is correct or desirable, I think a reasonable argument could be made for leaving an existing taint in place too.

Which, if I understand correctly, is exactly the issue here? I have no strong feelings for either behaviour, and I don't recall any of the reviewers questioning the above (which makes me suspect they also had no particular opinion, rather than explicitly agreeing with doing it that way).

So long as the noTaintMaster aspect of #55479 remains functional, I don't really have an opinion on the behaviour WRT existing taints. IOW, I'm fine with the behaviour being changed and the test removed or, perhaps better, adjusted for the new behaviour.

@ashleyschuett ashleyschuett force-pushed the ashleyschuett:fix/nodetaints branch from 792df72 to f440883 Jun 14, 2018

@k8s-ci-robot k8s-ci-robot added size/M and removed size/S labels Jun 14, 2018

@ashleyschuett (Contributor, Author) commented Jun 14, 2018

I updated the test to reflect both the empty case and the merge case: if the kubeadm taints are nil, the node's taints should not be removed, and if there are existing taints, they should be merged with those from the kubeadm config.
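The two cases the updated test covers could be sketched as below. This is a hypothetical illustration, not kubeadm's real patch helper: `Taint` is a simplified stand-in for `v1.Taint`, and de-duplication is omitted for brevity.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Taint is a simplified stand-in for v1.Taint (illustrative only).
type Taint struct {
	Key    string `json:"key"`
	Effect string `json:"effect"`
}

// taintPatch sketches the two test cases: a nil/empty config leaves the
// node untouched (empty patch), while configured taints are sent together
// with the node's existing ones.
func taintPatch(existing, fromConfig []Taint) string {
	if len(fromConfig) == 0 {
		// Nothing requested in the kubeadm config: existing taints stay.
		return "{}"
	}
	merged := append(append([]Taint{}, existing...), fromConfig...)
	b, _ := json.Marshal(map[string]interface{}{
		"spec": map[string]interface{}{"taints": merged},
	})
	return string(b)
}

func main() {
	uninit := Taint{Key: "node.cloudprovider.kubernetes.io/uninitialized", Effect: "NoSchedule"}
	master := Taint{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}
	fmt.Println(taintPatch([]Taint{uninit}, nil))             // empty patch: existing taints untouched
	fmt.Println(taintPatch([]Taint{uninit}, []Taint{master})) // patch carries both taints
}
```

Contrast this with the removed test case above, whose expected patch `{"spec":{"taints":null}}` would have wiped the existing taints.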

@ashleyschuett ashleyschuett force-pushed the ashleyschuett:fix/nodetaints branch from f440883 to 3561588 Jun 14, 2018

@fabriziopandini (Contributor) commented Jun 15, 2018

@ijc
FYI, the functional behavior of noTaintMaster is preserved, but the implementation changed in the v1alpha2 API.

@ashleyschuett
I have no strong opinions here, but I see no harm in the proposed change.
/lgtm

/cc @luxas
For final approval

@timothysc timothysc added this to the v1.11 milestone Jun 15, 2018

@k8s-github-robot (Contributor) commented Jun 15, 2018

[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process

@ashleyschuett @fabriziopandini

Pull Request Labels
  • sig/cluster-lifecycle: Pull Request will be escalated to these SIGs if needed.
  • priority/critical-urgent: Never automatically move pull request out of a release milestone; continually escalate to contributor and SIG through all available channels.
  • kind/bug: Fixes a bug discovered during the current release.
@timothysc (Member) commented Jun 15, 2018

/approve

thx @ashleyschuett !

@k8s-ci-robot (Contributor) commented Jun 15, 2018

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ashleyschuett, fabriziopandini, timothysc

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-github-robot (Contributor) commented Jun 15, 2018

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot (Contributor) commented Jun 15, 2018

Automatic merge from submit-queue (batch tested with PRs 64796, 65068). If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot k8s-github-robot merged commit 9a4263d into kubernetes:master Jun 15, 2018

8 of 18 checks passed

  • Submit Queue: Required GitHub CI test is not green: pull-kubernetes-e2e-gce
  • pull-kubernetes-e2e-gce: Job triggered.
  • pull-kubernetes-e2e-gce-100-performance: Job triggered.
  • pull-kubernetes-e2e-gce-device-plugin-gpu: Job triggered.
  • pull-kubernetes-e2e-kops-aws: Job triggered.
  • pull-kubernetes-integration: Job triggered.
  • pull-kubernetes-kubemark-e2e-gce: Job triggered.
  • pull-kubernetes-kubemark-e2e-gce-big: Job triggered.
  • pull-kubernetes-node-e2e: Job triggered.
  • pull-kubernetes-verify: Job triggered.
  • cla/linuxfoundation: ashleyschuett authorized
  • pull-kubernetes-bazel-build: Job succeeded.
  • pull-kubernetes-bazel-test: Job succeeded.
  • pull-kubernetes-cross: Skipped
  • pull-kubernetes-e2e-gke: Skipped
  • pull-kubernetes-local-e2e: Skipped
  • pull-kubernetes-local-e2e-containerized: Skipped
  • pull-kubernetes-typecheck: Job succeeded.