Using layer2, is it possible to see which of the Kubernetes worker nodes is the primary? #494
In order to see the current worker node, I could use this command
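The actual command was not captured in this thread. One common way to find the announcing node (an assumption, not quoted from the original comment; the service name `my-service` is hypothetical) is to look at the service's events, since the MetalLB speaker records the announcement as an event on the service, or to grep the speaker logs:

```shell
# The MetalLB speaker emits an event on the service naming the node
# that announces the LoadBalancer IP ("my-service" is a placeholder).
kubectl describe service my-service | grep -i announc

# Alternatively, search the speaker pod logs for the announcement
# (label selector assumes the standard MetalLB manifests).
kubectl -n metallb-system logs -l app=metallb,component=speaker | grep -i announc
```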
And to answer the second question: first, I need to cordon the node (using `kubectl cordon <nodename>`) and then I should be able to move the pod using, e.g.:
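The example the commenter referenced was not captured. A sketch of the cordon-then-move flow (node and deployment names are assumptions) might look like:

```shell
# Assumed names: node "worker-2", pods belonging to deployment "my-app".
kubectl cordon worker-2    # mark the node unschedulable

# List the pods currently running on that node.
kubectl get pods -o wide --field-selector spec.nodeName=worker-2

# Deleting a pod causes its controller to recreate it; since the node is
# cordoned, the replacement lands on another node.
kubectl delete pod my-app-<pod-id>

kubectl uncordon worker-2  # when maintenance is done
```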
Well, what if you don't have istio?
This has nothing to do with istio. You can't move the announcer ahead of scheduled maintenance. The assignment of the announcer is done in a distributed fashion, so there's no central point you can tell "okay, switch now". Failover will happen during the node drain.
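In other words, layer2 failover piggybacks on the normal maintenance flow: draining the node evicts the pods, and MetalLB moves the announcement to another eligible node. A minimal sketch (the node name `worker-2` is assumed):

```shell
# Evict pods from the node; MetalLB fails the announcement over to
# another node as part of the drain.
kubectl drain worker-2 --ignore-daemonsets

# Make the node schedulable again after maintenance.
kubectl uncordon worker-2
```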
@danderson, sorry to revive an old issue. Is there a technical limitation that makes it impossible to force MetalLB to change the announcing node? It feels like a bit of a hassle to cordon the node and then delete all the exposed pods on it. It would be really nice if you could tell MetalLB to avoid a node, maybe through an API? That way, I wouldn't need to find all the pods I need to delete (it could be quite a few).
@danderson, sorry for the shameless bumps! But I still feel it's a bit weird to have to delete perfectly fine pods in order to "fail over" the announced IP address. Is there any way that a "force" could be introduced?
@anton-johansson we can re-evaluate this. I think it would be best if you could open a new issue describing the desired functionality in as much detail as possible. This will help us figure out whether it's reasonable to implement, or whether there is an alternative solution. In general we do our best to ensure a requested functionality is generic (i.e. it doesn't address a very "specialized" use case) and can be integrated into MetalLB in a reasonable way, i.e. without interacting negatively with other features and/or radically changing the project's design.
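As an aside not from this thread: newer MetalLB releases (v0.13+, with CRD-based configuration) gained a way to restrict which nodes may announce, which covers the "avoid a node" request above. A hedged sketch (pool name and node label are assumptions):

```yaml
# Only nodes carrying the label metallb-announce=true are eligible to
# announce IPs from "my-pool"; removing the label from a node steers
# announcements away from it.
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: l2-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - my-pool
  nodeSelectors:
  - matchLabels:
      metallb-announce: "true"
```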
Thanks for the reply @johananl, I'll submit a new issue with the request. |
First, huge thanks 👏 for MetalLB. I've been researching various `LoadBalancer` solutions when running Kubernetes on-premises and everything was horribly complicated until I came across MetalLB. Using layer2 according to the documentation:
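The quoted documentation passage was not captured above. One way (an assumption, not from the thread) to check from outside the cluster which node currently owns a layer2 service IP is to inspect the ARP table on a machine in the same segment, since the announcing node's MAC answers ARP for the IP (the address `192.0.2.10` is a placeholder):

```shell
# Populate the ARP cache, then see which MAC answered for the service IP;
# compare it against the worker nodes' interface MACs.
ping -c1 192.0.2.10 >/dev/null
arp -n 192.0.2.10
```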