implement issue-19 add prefer no schedule taint to avoid double draining of pods #250

Merged
9 commits merged into master on Jan 12, 2021

Conversation

@damoon (Contributor) commented Nov 28, 2020

This implements #19.

The idea is to set a PreferNoSchedule taint.
The taint is added when the reboot sentinel is found and removed after the restart.

If the node cannot restart immediately, the taint asks the Kubernetes scheduler to avoid placing new pods on the tainted node.
But this is only a preference: as a last resort, the Kubernetes scheduler will still place pods onto the node.
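For illustration, this is roughly the taint in question, using the default key discussed later in this thread (a sketch with client-go types, not the exact code from this PR):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The taint kured adds while a reboot is pending. PreferNoSchedule only
	// discourages new pods; the scheduler may still place pods on the node
	// as a last resort. The variable name is illustrative.
	rebootPending := corev1.Taint{
		Key:    "weave.works/kured-node-reboot", // kured's default taint key
		Effect: corev1.TaintEffectPreferNoSchedule,
	}
	fmt.Printf("%+v\n", rebootPending)
}
```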

@damoon (Contributor, Author) commented Nov 29, 2020

The test is failing because the taint code uses the update method, but the RBAC rules only allow patching nodes.
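For context, a rough sketch of the relevant RBAC rule: the node permissions include patch but not update (the exact verb list below is assumed, not copied from the manifest):

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

func main() {
	// Sketch of the node permissions in kured's ClusterRole: "patch" is
	// granted, "update" is not, so changing taints has to go through Patch.
	nodeRule := rbacv1.PolicyRule{
		APIGroups: []string{""},
		Resources: []string{"nodes"},
		Verbs:     []string{"get", "patch"}, // assumed subset; no "update"
	}
	fmt.Printf("%+v\n", nodeRule)
}
```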

@damoon (Contributor, Author) commented Nov 29, 2020

It works now 😃
Would someone like to review it?

@evrardjp (Collaborator) left a comment

First, allow me to thank you for this PR! Well thought out.

Please see my inline comments for the first part of the review (maybe a comment, and a little code change?) on the preferNoSchedule code location/behaviour.
Finally, I see a problem with the PR right now (maybe I am mistaken, could you clarify?):
This PR changes the state of the TaintEffectPreferNoSchedule taint regardless of whether it was tainted by kured. This is a problem for me, because an administrator might want to mark a node NoSchedule for other reasons, and we might remove that without their approval. WDYT?

@damoon (Contributor, Author) commented Nov 30, 2020

> This PR changes the state of the TaintEffectPreferNoSchedule taint regardless of whether it was tainted by kured. This is a problem for me, because an administrator might want to mark a node NoSchedule for other reasons, and we might remove that without their approval. WDYT?

Only the taint "weave.works/kured-node-reboot" (by default) is added or removed by the new behavior. Other taints are left untouched. The code from https://github.com/damoon/kured/blob/19-PreferNoSchedule/cmd/kured/main.go#L326 to https://github.com/damoon/kured/blob/19-PreferNoSchedule/cmd/kured/main.go#L367 ensures that only this one taint is added or removed.
TaintEffectPreferNoSchedule is the effect of the taint. Alternatives are NoSchedule and NoExecute.
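For illustration, the filtering boils down to something like the following sketch, which drops only the entry with kured's key and keeps every other taint (the helper name is hypothetical, not the actual code in main.go):

```go
package main

import corev1 "k8s.io/api/core/v1"

// withoutTaint returns a copy of taints with the entry whose key matches
// removed; every other taint, whatever its key or effect, is kept as-is.
func withoutTaint(taints []corev1.Taint, key string) []corev1.Taint {
	out := make([]corev1.Taint, 0, len(taints))
	for _, t := range taints {
		if t.Key == key {
			continue // drop only kured's own taint
		}
		out = append(out, t)
	}
	return out
}
```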

@damoon (Contributor, Author) commented Nov 30, 2020

I was not sure how the code is organized.
Should this get its own package in https://github.com/weaveworks/kured/tree/master/pkg ?

@damoon force-pushed the 19-PreferNoSchedule branch 4 times, most recently from 0f60dba to bd21661 on November 30, 2020 18:06
@damoon requested a review from evrardjp on December 1, 2020 10:32
@evrardjp (Collaborator) left a comment

I have not tested it, but I like it.

@evrardjp (Collaborator) commented Dec 1, 2020

CI tested it, so I guess that's a good start.

Should we extend the smoke test to ensure that we can schedule a basic workload? (And maybe verify the behaviour of this PR?)

@damoon (Contributor, Author) commented Dec 1, 2020

I tested the parallel reboot of nodes and the taint creation/removal.
(But this happened before moving the code to its own package.)
I would feel more comfortable if someone else checked it too.

@damoon (Contributor, Author) commented Dec 1, 2020

I found an issue:
rebootBlocked checks Prometheus for alerts and does not indicate that another node holds the lock.
I moved adding the taint into the !acquire block.

Once the testing is done I will push the fix.
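Roughly, the adjusted flow looks like the sketch below; all function names are illustrative stand-ins for kured's real helpers, not the actual code:

```go
package main

// Illustrative stubs standing in for kured's real helpers.
func rebootRequired() bool                   { return true }
func rebootBlocked() bool                    { return false }
func acquire(nodeID string) bool             { return false }
func addPreferNoScheduleTaint(nodeID string) {}

func rebootLoopIteration(nodeID string) {
	if !rebootRequired() {
		return
	}
	if rebootBlocked() {
		// Blocked by a Prometheus alert or a blocking pod: leave the
		// taint state alone here (see the discussion further down).
		return
	}
	if !acquire(nodeID) {
		// Another node holds the reboot lock: taint this node now so the
		// scheduler prefers other nodes while it waits for its turn.
		addPreferNoScheduleTaint(nodeID)
		return
	}
	// ... drain and reboot ...
}

func main() { rebootLoopIteration("example-node") }
```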

@damoon (Contributor, Author) commented Dec 1, 2020

Fixed and tested. Works for me.

@bboreham (Contributor) left a comment

I left some minor comments below.

It would be good to mention this feature in the README.

I did not really follow the test / add patch stuff. Do you have a reference to some documentation explaining it, or another program that does the same? (kubectl taint does not appear to do it).

Also I think there is another corner case: if rebootRequired() goes from true to false, the taint will not be removed until the end of the reboot window, or until kured restarts.

@damoon (Contributor, Author) commented Jan 6, 2021

I updated the parameter section in the README.

For patches I found this example: https://dwmkerr.com/patching-kubernetes-resources-in-golang/ and for the test operation I found http://jsonpatch.com/#test.
The test operation is needed to ensure that the list of taints did not change on the server side while the index to edit was computed locally.
I believe kubectl uses update instead of patch and avoids this problem with an optimistic lock.
This implementation chose patch because patch was already part of the RBAC profile, while update on nodes is missing.
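For illustration, such a patch pairs a "test" operation on the taint's index with the actual "remove"; a rough client-go sketch (not the exact code from pkg/taints):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// removeTaintByIndex removes the taint at the given index from a node's spec,
// but only if that slot still holds the expected key: the "test" operation
// makes the whole patch fail if the taint list changed on the server after
// the index was computed locally.
func removeTaintByIndex(ctx context.Context, client kubernetes.Interface, nodeName string, index int, expectedKey string) error {
	patch := []byte(fmt.Sprintf(
		`[{"op":"test","path":"/spec/taints/%d/key","value":%q},`+
			`{"op":"remove","path":"/spec/taints/%d"}]`,
		index, expectedKey, index))

	_, err := client.CoreV1().Nodes().Patch(ctx, nodeName, types.JSONPatchType, patch, metav1.PatchOptions{})
	return err
}
```

If something else modified the taint list in the meantime, the test operation fails and the patch is rejected, so the caller can re-read the node and retry.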

I changed the behavior so that the taint is removed when rebootRequired() switches from true to false.

@bboreham (Contributor) left a comment

Looks good to me.
Thanks for all the changes, and for the explanation of why it is done this way.

@evrardjp (Collaborator) left a comment

I have added a question/comment to clarify our goals. I would prefer that this is clarified before merging. However, I don't think it's worth blocking this PR from merging if we don't reach consensus.

Please note that I didn't test this patch. The code looks okay to me.

}

if rebootBlocked(client, nodeID) {
continue
Collaborator:

If the reboot is blocked by a pod or a Prometheus alert, shouldn't we still unmark the node?
For me it means that this node was voluntarily put out of the loop by an administrator, even if the reboot is required during the maintenance window.

In this case, I think that rebootBlocked should also remove any existing NoSchedule taint.
On the other hand, one could argue that a node "out of the loop" shouldn't be touched at all.

In any case, this decision should be recorded somewhere (in a comment, or in the commit message).

Contributor:

Being blocked by Prometheus can be due to a "pending" alert: in my case I see those come and go quite often.
So I think it would be bad for the taint to oscillate; I would say once you need to reboot you shouldn't be scheduling new pods onto that node. (But it's only "preferred", so the node may still get some if necessary.)

Maybe we should make the taint "" by default, so people can choose to try out the feature and report back?

Collaborator:

I am fine with both parts of your answer.

Collaborator:

@damoon do you mind updating the taint to "" by default?

Collaborator:

Alternatively, we can merge this after 16.0, and I can do another PR to update the default taint.

Contributor (Author):

I changed the flag to be disabled by default.

Contributor (Author):

Thinking about the rebootBlocked() cases.

Situation: a node waits a long time to reboot but is blocked by other rebooting nodes. Then a pod that blocks reboots is scheduled onto that node.
A) The taint is removed. As a result even more work gets scheduled here and needs to be drained again. This should be avoided.
B) The taint stays. As a result the nodes without the taint get a higher average load, even though this node cannot reboot once it is its turn to do so. This should also be avoided.
The same applies to a blocking Prometheus alert.

My feeling is to leave a node that wants to reboot, but is blocked from doing so, tainted. That expresses the intent best.
As mentioned, the taint is "only" preferred, and in case B I trust the Kubernetes scheduler and resource requests enough for uneven load not to be a real issue.

Collaborator:

I agree.

@evrardjp added this to the 1.7.0 milestone on Jan 11, 2021
@bboreham (Contributor) left a comment

lgtm, thanks!

@dholbach (Member) commented:

Nice work and thanks for hanging in there.

Do you think you could add a note to #295 about this feature?

@dholbach merged commit fade706 into kubereboot:master on Jan 12, 2021
@dholbach linked an issue on Jan 12, 2021 that may be closed by this pull request
@ckotzbauer mentioned this pull request on May 19, 2021
Successfully merging this pull request may close these issues.

taint node with PreferNoSchedule until it gets drained
4 participants