Allow for arbitrary labeling / tainting of worker node pools #3

Closed
timoreimann opened this issue Jul 11, 2019 · 13 comments
Labels
enhancement New feature or request

Comments

@timoreimann
Contributor

Copying from this DO Idea:

As a user, I’d like to be able to add arbitrary (Kubernetes) labels to the worker node pools that are part of my DOKS cluster. Currently, tags on the pool are insufficient: they are not automatically synced, and their validation rules generally differ from those for Kubernetes labels. Label support is important for advanced scheduling / placement decisions in Kubernetes. Manually labeling nodes is infeasible because new nodes added to the pool will not carry the same base set of labels. During events such as the recent CVE fix, all nodes were replaced, and workloads that depended on nodes with certain labels could not be scheduled.

See existing thread at digitalocean/digitalocean-cloud-controller-manager#136 for more information.

@timoreimann
Contributor Author

/cc @normanjoyner

@normanjoyner

Thanks for the heads up @timoreimann! Love that these will be tracked here!

@timoreimann
Contributor Author

timoreimann commented Aug 5, 2019

A few users have expressed a desire to apply taints in a similar fashion. Noting it down here so we keep track of it, as it seems closely related to labels.

@timoreimann timoreimann changed the title Allow for arbitrary labeling of worker node pools Allow for arbitrary labeling / tainting of worker node pools Sep 28, 2019
@timoreimann
Contributor Author

timoreimann commented Sep 28, 2019

Based on customer feedback we have collected, it looks like a reasonable place to start providing support for this feature is at the node pool level: that is, allow users to specify labels and taints that apply to all nodes belonging to the same node pool. Nodes being added to that pool as part of an upscale would receive the configured labels / taints right from the start. Similarly, changes to a node pool's labels / taints would propagate to all nodes automatically.

The ability to label / taint individual nodes in a persistent way doesn't seem to be a dominant use case, presumably because of the immutable / interchangeable nature of worker nodes. This would also be more difficult to implement in DOKS as of now since node names change across node recycles and rolling updates.
That said, if users have a real need for this particular use case (and it could not be satisfied with node pool labels / taints), we'd love to hear from you.

@timoreimann
Contributor Author

timoreimann commented Feb 4, 2020

We just shipped support for persistent node pool labels: you can now associate one or more labels to a DOKS node pool and watch them persist on any nodes that belong to that pool now or in the future. This should make it easier to manage DOKS nodes using Kubernetes label selectors.

The feature can be used through the DigitalOcean Kubernetes API directly (see also our change log update), through godo v1.30.0+, and through doctl v1.38.0+. It works on any DOKS cluster version. (Note that labels cannot be set via the DigitalOcean cloud control panel yet; work on that is underway.)
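
If you want to give it a spin, here is a rough doctl sketch. The cluster ID, pool name/size/count, and the label key/value are placeholders, and flag syntax may differ between doctl versions, so please treat doctl kubernetes cluster node-pool create --help and the API docs as the source of truth:

    # Create a new node pool whose nodes all carry a custom label.
    # <cluster-id> and the label key/value are placeholders.
    doctl kubernetes cluster node-pool create <cluster-id> \
      --name db-pool \
      --size s-4vcpu-8gb \
      --count 3 \
      --label workload=database

    # Check that the label shows up on the pool's nodes.
    kubectl get nodes -l workload=database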

@normanjoyner

Awesome, thanks for the update @timoreimann. And many thanks to everyone who made this possible; this is a super useful feature! 🙏

@BartOtten

@timoreimann Having labels helps with tainting the nodes manually, but I don't see how we can taint nodes in a pool automatically (e.g., when the pool scales up). Any advice?

@timoreimann
Contributor Author

👋 @BartOtten

Persistent node (pool) taints are still in the making. The best workaround for today is to build/run something that watches over nodes and ensures that taints are set accordingly.

We'll update this ticket once we've finished the work to support taints properly as well.

@tombh

tombh commented Aug 3, 2020

Has anyone tried @timoreimann's suggestion? Any recommendations or gotchas? Is the idea to, say, have a cron job somewhere that runs kubectl taint [list of node IDs] every minute or something?

@shrumm

shrumm commented Aug 11, 2020

@tombh you can have a look at https://github.com/DataCueCo/do-node-tainter

I set up a CronJob that watches for nodes that aren't tainted and taints them. It should tide folks over until the official feature is released. Feedback welcome.
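
For reference, the core of such a job can be as small as a single kubectl command run on a schedule. This is only a sketch: it assumes the pool's nodes carry a label identifying their pool (here doks.digitalocean.com/node-pool=db-pool), and the taint key/value/effect are made up for illustration:

    #!/usr/bin/env bash
    # Re-apply a taint to every node of a given pool; safe to run repeatedly.
    set -euo pipefail

    POOL_SELECTOR="doks.digitalocean.com/node-pool=db-pool"  # assumed pool label
    TAINT="dedicated=db:NoSchedule"                          # illustrative taint

    # --overwrite makes the command idempotent, so nodes that already carry
    # the taint are left effectively unchanged.
    kubectl taint nodes -l "$POOL_SELECTOR" "$TAINT" --overwrite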

@Gallardo994

I second this; it would be very useful if, e.g., you deploy applications like databases onto specific nodes and don't want any other pods scheduled onto those nodes.

Additional idea: add a CRD-driven controller that polls DO's API and updates the cluster according to node pool tags (using some prefix to mark a taint?).

@timoreimann
Contributor Author

Quick update: support for persistent node pool taints is making good progress and should be available soon.

@timoreimann
Contributor Author

I'm happy to announce that persistent node pool taints are now publicly available. 🎉

All currently available DOKS cluster versions are supported. To associate a taint with all nodes of a given pool, set the taints field on a node pool resource in the REST API. Alternatively, you can use a recent version of our doctl CLI to conveniently set taints during cluster creation, node pool creation, and node pool updates. (UI integration is yet to come.) See our change log update for pointers and examples.
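
As a rough illustration (the cluster/pool identifiers and the taint are placeholders, and exact flag syntax may vary by doctl version, so check doctl kubernetes cluster node-pool create --help):

    # Create a node pool whose nodes are all tainted from the start.
    doctl kubernetes cluster node-pool create <cluster-id> \
      --name db-pool \
      --size s-4vcpu-8gb \
      --count 3 \
      --taint dedicated=db:NoSchedule

    # Or set taints on an existing pool.
    doctl kubernetes cluster node-pool update <cluster-id> <pool-id> \
      --taint dedicated=db:NoSchedule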
