Implement BGP add-path #1

Open
danderson opened this issue Nov 27, 2017 · 13 comments
@danderson
Contributor

Currently MetalLB suffers from the same load imbalance problems as GCP when using externalTrafficPolicy: Local, because BGP ends up load-balancing by node, regardless of the pod count on each node.

One possible solution is to implement BGP add-path (RFC 7911) in MetalLB's BGP speaker, and make speakers advertise one distinct path for each pod running on a given node. This would effectively push weight information upstream, and compatible routers should therefore weight their traffic distribution.
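A minimal sketch of what that could look like (hypothetical types and names, not MetalLB's actual internals): RFC 7911 prepends a 4-byte Path Identifier to each NLRI prefix, so a speaker could emit one (path ID, prefix) pair per local pod instead of a single announcement.

```go
package main

import "fmt"

// advertisement is a hypothetical representation of one add-path
// announcement: RFC 7911 prepends a 4-byte Path Identifier to each
// NLRI prefix, so the same prefix can be announced several times on
// one session.
type advertisement struct {
	PathID uint32 // must be unique per prefix on a given session
	Prefix string // the service's external IP, e.g. "192.0.2.10/32"
}

// pathsForNode builds one advertisement per pod running on this node,
// all with this node as the next-hop. A router that spreads traffic
// across every received path would then weight the node by pod count.
func pathsForNode(prefix string, localPods int) []advertisement {
	ads := make([]advertisement, 0, localPods)
	for i := 0; i < localPods; i++ {
		ads = append(ads, advertisement{PathID: uint32(i + 1), Prefix: prefix})
	}
	return ads
}

func main() {
	// A node running 3 pods for the service announces 3 distinct paths.
	for _, ad := range pathsForNode("192.0.2.10/32", 3) {
		fmt.Printf("announce %s path-id %d\n", ad.Prefix, ad.PathID)
	}
}
```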

Annoyingly, the spec doesn't make it clear that routers are expected to translate multiple distinct paths with the same next-hop as a weighted assignment in their FIB. It would make sense naively, but it may not be what people implementing the spec had in mind in terms of use cases. So, we'll have to implement a prototype and see how BIRD and others behave, and if they behave sensibly, we can productionize the change.

@danderson danderson self-assigned this Nov 27, 2017
@halfa

halfa commented Dec 7, 2017

Open question: why are you re-implementing a BGP library specifically for MetalLB instead of using something like GoBGP? I've done a PoC running it against JunOS routers and encountered issues with the capabilities advertisement.

@danderson
Contributor Author

Good question! The initial prototype used GoBGP, but I kept having issues and headaches with it. The API surface is completely undocumented (both the Go API and the gRPC API), and it's completely un-idiomatic Go (frankly, it feels like Python that was ported word for word into Go). So when I (frequently) encountered strange behaviors, it was a chore to debug.

Aside from that, GoBGP also implements way more stuff than MetalLB needs. It wants to be a real router, implementing the full BGP convergence algorithm and all that. All that behavior is actively harmful to what MetalLB wants to do (which is just push a couple of routes and ignore everything the peer sends). It's possible to make it work, by segmenting things into VRFs, adding policies to drop all inbound routes, ... But at that point I'm writing significant amounts of code, against an undocumented library that implements 10x what we need, to trick it into not doing 99% of what it tries to do by default. That's a lot of additional complexity just to serialize and transmit an update message.

With that said, there are obviously downsides as well, and the big one is that OSRG has the money to buy vendor gear and do interop testing against vendors' quirky BGP stacks, and I don't.

What version of MetalLB did you use for your PoC? Yesterday I updated the manifests to point to a version that should resolve the main source of unhappy peers (not advertising MP-BGP extensions), so I'm curious if this makes JunOS happy as well. If it doesn't, any chance you could file a bug with a pcap of the BGP exchange?

@pdf

pdf commented Apr 20, 2018

Correct me if I'm wrong, but without add-path support, aren't we in a situation where no load-balancing is actually done when externalTrafficPolicy: Local is specified? MetalLB only announces a single next-hop and expects k8s to perform the actual load-balancing, whereas k8s expects load-balancers to distribute traffic across service group members, especially with externalTrafficPolicy: Local.

@danderson
Contributor Author

That's not accurate. Without add-path, a router receiving multiple advertisements can still choose to do multipath routing across all peers: each path is distinct in terms of BGP's decision algorithm, because each one comes from a different router ID.

You still need a router capable of doing multipath (~all of the professional ones, and most of the entry level ones), and you have to explicitly enable that behavior (otherwise it will just pick one of the paths and only use that one - this is true even with add-path enabled).

The issue we have without add-path is that the traffic distribution is on a per-node basis, not a per-pod basis. So if you have multiple pods for a service on a single node, it can lead to traffic imbalances, as explained in more detail at https://metallb.universe.tf/usage/#local-traffic-policy.
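A made-up example of that imbalance (numbers are illustrative, not from the linked docs): two nodes behind a router doing plain per-node ECMP, one node with three pods and one with a single pod.

```go
package main

import "fmt"

func main() {
	// Hypothetical cluster: a router doing plain per-node ECMP splits
	// traffic evenly across nodes, regardless of how many pods each has.
	podsPerNode := map[string]int{"node-a": 3, "node-b": 1}

	nodeShare := 1.0 / float64(len(podsPerNode)) // 50% per node here
	for node, pods := range podsPerNode {
		fmt.Printf("%s: %.0f%% of traffic, %.1f%% per pod (%d pods)\n",
			node, nodeShare*100, nodeShare/float64(pods)*100, pods)
	}
	// node-a's pods each see ~16.7% of the traffic while node-b's lone
	// pod sees 50%, even though all four pods back the same service.
}
```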

I should add, and maybe this is the source of confusion: in BGP mode, each speaker is only advertising itself as a nexthop, it's not attempting to advertise the routes on behalf of other machines. So, from the external router's perspective, it sees N routers, each advertising routes that point to themselves - not N routers all advertising the exact same thing.

@pdf

pdf commented Apr 20, 2018

I should add, and maybe this is the source of confusion: in BGP mode, each speaker is only advertising itself as a nexthop, it's not attempting to advertise the routes on behalf of other machines.

Indeed, this contributes to my confusion - what happens in the case where you have externalTrafficPolicy: Local services to route that exist on nodes that aren't running MetalLB speakers?

@danderson
Contributor Author

Those pods/nodes won't get any traffic. When using the Local traffic policy, each node is responsible for attracting traffic to itself, if it contains pods for the relevant service.

In fact, this is almost exactly the same as the Cluster traffic policy behavior: each node is responsible for attracting its own traffic; the only difference is that in Cluster mode the attraction is unconditional, and in Local mode there must be at least 1 service pod on the machine.

Once we have add-path, I guess we could revisit that topology. However, there is one big benefit to having each node be the only one responsible for attracting its own traffic: if the node fails, the BGP session breaks immediately, and traffic stops flowing to the broken node without having to wait for k8s to converge and notice that the pods are unhealthy. This greatly speeds up draining bad nodes (~0s vs. tens of seconds for a gray failure like power loss).

@pdf

pdf commented Apr 20, 2018

Those pods/nodes won't get any traffic. When using the Local traffic policy, each node is responsible for attracting traffic to itself, if it contains pods for the relevant service.

Right, makes sense now that I understand how this is designed to work (and indeed, I hadn't enabled multipath on the upstream routers in my lab so I was only seeing a single route, derp).

In fact, this is almost exactly the same as the Cluster traffic policy behavior: each node is responsible for attracting its own traffic; the only difference is that in Cluster mode the attraction is unconditional, and in Local mode there must be at least 1 service pod on the machine.

The other difference, I believe, between Local and Cluster is that Cluster traffic will still arrive at a node that can handle the traffic even if that node is not a speaker, since k8s will NAT it to get there.

there is one big benefit to having each node be the only one responsible for attracting its own traffic: if the node fails, the BGP session breaks immediately

I can see logic in this approach for fault-detection, vs waiting for k8s to detect problems, though I do wonder if this coupling is always ideal.

Thanks kindly for the explanations, everything makes perfect sense now.

@danderson
Contributor Author

You're correct about the additional behavior of the Cluster policy. Courtesy of kube-proxy, it will do "perfect" even distribution across all service pods, at the cost of rewriting the source IP when it NATs.

As for the coupling of speaker and node... I agree, it's a tradeoff. I picked the one I picked mostly arbitrarily, based on past experiences at Google where this behavior was beneficial. But other options are just as valid, as illustrated in #148 for example - in that bug, the person effectively wants the BGP state to reflect what k8s thinks at all times, regardless of the health of individual speaker pods. It's a different set of tradeoffs, in that some failure modes become better and others become worse.

The good news is that when we get add-path, we suddenly have more choices as to which behavior we want... although I do worry about offering too many configuration options; I already have a hard time testing all combinations sufficiently :)

@bjornryden

If I understand this correctly, I see a problem with add-path with regard to how many routes end up in the receiving routers. At a very quick glance, the limit on ECMP routes per destination ranges from 16 (older custom silicon from C/J) to 1024 (Trident 2+).
C/J both implement a weighted ECMP solution based on BGP communities, which would reduce the number of routes needed to one per node (with active pods) rather than one per pod; a sketch of that community encoding follows the links below.
Juniper implementation:
https://www.juniper.net/documentation/en_US/junos/topics/example/bgp-multipath-unequal.html
Cisco implementation:
https://supportforums.cisco.com/t5/service-providers-documents/asr9000-xr-understanding-unequal-cost-multipath-ucmp-dmz-link/ta-p/3138853
IETF Draft:
https://tools.ietf.org/html/draft-ietf-idr-link-bandwidth-07
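For reference, the link bandwidth extended community defined in that draft is a non-transitive two-octet-AS-specific community (type 0x40, sub-type 0x04) whose value carries a 2-byte AS number and the bandwidth as a 4-byte IEEE-754 float in bytes per second. A minimal sketch of encoding it, assuming a speaker used its local pod count as a synthetic bandwidth (the function name and the pod-count convention are illustrative, not MetalLB code):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

// linkBandwidthExtCommunity encodes the DMZ link bandwidth extended
// community from draft-ietf-idr-link-bandwidth: type 0x40 (non-transitive,
// two-octet-AS-specific), sub-type 0x04, then a 2-byte AS number and the
// bandwidth as a 4-byte IEEE-754 float in bytes per second.
func linkBandwidthExtCommunity(as uint16, bytesPerSec float32) [8]byte {
	var ec [8]byte
	ec[0] = 0x40 // non-transitive, two-octet-AS-specific
	ec[1] = 0x04 // link bandwidth sub-type
	binary.BigEndian.PutUint16(ec[2:4], as)
	binary.BigEndian.PutUint32(ec[4:8], math.Float32bits(bytesPerSec))
	return ec
}

func main() {
	// Synthetic "bandwidth" proportional to the local pod count, so a
	// weighted-ECMP-capable router splits traffic per pod, not per node.
	const localPods = 3
	ec := linkBandwidthExtCommunity(64512, float32(localPods))
	fmt.Printf("extended community: %x\n", ec)
}
```

A router that supports unequal-cost multipath from this community would then split traffic across next-hops in proportion to the advertised values, i.e. roughly per pod rather than per node.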

@danderson
Contributor Author

Yup, ECMP group size is a concern. The 100% fix for that is WCMP (weighted cost multipath), which the silicon typically supports but the upper-layer protocols don't really expose.

Thanks for the references on Cisco's and Juniper's approaches to this! The draft raises some questions for me, so I'll have to consult with a couple of networking friends... but it seems plausible to implement.

Regarding ECMP group size: IIRC, Broadcom Scorpion, the predecessor of Trident, already supported 128-way ECMP, and later generations of silicon go up to 512- or even 1024-way ECMP. So, for enterprises using even semi-modern hardware, group size shouldn't be a huge issue. I agree that homelabbers and people using lower-tier hardware may indeed have issues, however; there's not much I can do about that except offer the option of community+bandwidth balancing for people who have compatible hardware.

@CJP10

CJP10 commented Dec 20, 2018

I am curious whether removing additional paths that were added by add-path has any significant downside.

Assuming it doesn't, even capping the paths at 16 by default to support older hardware would greatly improve load distribution.

My initial thought is to have the node with the most pods advertise 16 paths and the node with the fewest advertise a single path, with all the other nodes falling somewhere in between based on their pod counts relative to those two.

Repeating this process whenever the distribution of pods changes should give some sort of "weighted" routing.
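A rough sketch of that interpolation (illustrative only; scalePaths is a hypothetical helper, not existing MetalLB code): map each node's pod count linearly onto [1, 16], so relative weights are roughly preserved while staying within small ECMP group limits.

```go
package main

import "fmt"

// scalePaths maps each node's pod count linearly onto [1, maxPaths]:
// the node with the fewest pods advertises 1 path, the node with the
// most advertises maxPaths, and everything else lands in between
// (integer rounding, so the proportions are approximate).
func scalePaths(podsPerNode map[string]int, maxPaths int) map[string]int {
	lo, hi := -1, -1
	for _, n := range podsPerNode {
		if lo == -1 || n < lo {
			lo = n
		}
		if n > hi {
			hi = n
		}
	}
	paths := make(map[string]int, len(podsPerNode))
	for node, n := range podsPerNode {
		if hi == lo {
			paths[node] = 1 // all nodes equal: no weighting needed
			continue
		}
		paths[node] = 1 + (n-lo)*(maxPaths-1)/(hi-lo)
	}
	return paths
}

func main() {
	// Node "a" has the most pods and gets 16 paths, "c" the fewest and
	// gets 1; "b" lands in between. Prints: map[a:16 b:5 c:1]
	fmt.Println(scalePaths(map[string]int{"a": 12, "b": 4, "c": 1}, 16))
}
```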

@rchenzheng

Any news on fixing this?

@Jean-Daniel

Jean-Daniel commented Aug 9, 2022

Now that MetalLB has switched to FRR, this could be implemented using weighted ECMP. It is a use case that FRR explicitly supports:

http://docs.frrouting.org/en/stable-7.5/bgp.html#weighted-ecmp-using-bgp-link-bandwidth

One of the primary use cases of this capability is in the data center when a service (represented by its anycast IP) has an unequal set of resources across the regions (e.g., PODs) of the data center
