using layer2 is it possible to see which of the Kubernetes worker nodes is the primary? #494

Closed
nodesocket opened this issue Nov 13, 2019 · 7 comments

nodesocket commented Nov 13, 2019

First, huge thanks 👏 for MetalLB. I've been researching various LoadBalancer solutions when running Kubernetes on-premises and everything was horribly complicated until I came across MetalLB.

Using layer 2 mode, according to the documentation:

In layer 2 mode, one machine in the cluster takes ownership of the service

  • Is it possible to see which worker node in the Kubernetes cluster is the primary for MetalLB?
  • Secondly, if I want to perform scheduled maintenance on the primary worker node for MetalLB, is there a way to manually switch to another Kubernetes worker node?

jenciso commented Nov 15, 2019

To see the current worker node, you can run kubectl describe svc name_service. E.g.

kubectl describe svc -n istio-system istio-ingressgateway
...
External Traffic Policy:  Cluster
Events:
  Type    Reason        Age                From                Message
  ----    ------        ----               ----                -------
  Normal  IPAllocated   36m                metallb-controller  Assigned IP "10.64.13.130"
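
If the nodeAssigned event is hard to spot in the full describe output, you can also filter on the event reason directly; something like this should work (the namespace and service name here are just the ones from my example):

kubectl get events -n istio-system \
  --field-selector reason=nodeAssigned,involvedObject.name=istio-ingressgateway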

And to answer the second question.

First, I need to cordon the node (using kubectl cordon nodename) and then move the pod using kubectl delete pod ....

E.g.

[root@dcbvm090lb321 istio]# kubectl delete pod -n istio-system istio-ingressgateway-dfbdff6cc-l2jbm
pod "istio-ingressgateway-dfbdff6cc-l2jbm" deleted

Running kubectl describe svc again then shows the announcement moving to another node:

Events:
  Type    Reason        Age                  From                Message
  ----    ------        ----                 ----                -------
  Normal  IPAllocated   44m                  metallb-controller  Assigned IP "10.64.13.130"
  Normal  nodeAssigned  3m11s (x4 over 43m)  metallb-speaker     announcing from node "dcbvm090lb342.e-unicred.com.br"
  Normal  nodeAssigned  112s                 metallb-speaker     announcing from node "dcbvm090lb342.e-unicred.com.br"
  Normal  nodeAssigned  10s (x3 over 13s)    metallb-speaker     announcing from node "dcbvm090lb341.e-unicred.com.br"
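
Putting the whole flow together, it looks roughly like this (the node and pod names are just the ones from my example, adjust them for your cluster):

# keep new pods off the node we want to maintain
kubectl cordon dcbvm090lb342.e-unicred.com.br

# delete the LoadBalancer-backed pod(s) running on that node so they get
# rescheduled elsewhere; MetalLB should then announce from another node
kubectl delete pod -n istio-system istio-ingressgateway-dfbdff6cc-l2jbm

# ... perform the maintenance ...

# allow scheduling on the node again
kubectl uncordon dcbvm090lb342.e-unicred.com.br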

adampl commented Nov 26, 2019

Well, what if you don't have istio?

@danderson (Contributor)

This has nothing to do with Istio. If you kubectl describe service on any LoadBalancer service, there is an event recorded that tells you which node is currently announcing.

You can't move the announcer ahead of scheduled maintenance. The assignment of the announcer is done in a distributed fashion, so there's no central point you can tell "okay, switch now". Failover will happen during the node drain.
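
In practice that means simply draining the node as part of the maintenance, for example (the node name is a placeholder, and the exact drain flags depend on what runs on the node):

kubectl drain <node-name> --ignore-daemonsets
# ... perform maintenance, then:
kubectl uncordon <node-name>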

@anton-johansson

@danderson, sorry to revive an old issue. Is there a technical limitation that makes it impossible to force MetalLB to change the announcing node? It feels like a bit of a hassle to cordon, delete all pods on the node that are exposed with a LoadBalancer, then perform the maintenance and then uncordon.

It would be really nice if you could tell MetalLB to avoid a node, maybe through an API? That way, I don't really need to find all the pods that I need to delete (it could be a few).

@anton-johansson

@danderson, sorry for the shameless bumps!

But I still feel it's a bit weird to have to delete perfectly fine pods in order to "fail over" the announced IP address. Is there any way that a "force" could be introduced?

johananl (Member) commented Jun 8, 2020

It would be really nice if you could tell MetalLB to avoid a node, maybe through an API? That way, I don't really need to find all the pods that I need to delete (it could be a few).

@anton-johansson we can re-evaluate this. I think it would be best if you could open a new issue describing the desired functionality in as much detail as possible. This will help us figure out if it's reasonable to implement. If there is an alternative solution (e.g. kubectl drain) which kind of works but in your opinion isn't good enough, it would be very helpful if you could explain why such a solution doesn't work for you.

In general, we do our best to ensure that requested functionality is generic (i.e. it doesn't address a very "specialized" use case) and can be integrated into MetalLB in a reasonable way, i.e. without interacting negatively with other features and/or radically changing the project's design.

@anton-johansson

Thanks for the reply @johananl, I'll submit a new issue with the request.
