Do you use MetalLB? Tell us! #5

danderson opened this Issue Nov 28, 2017 · 52 comments



danderson commented Nov 28, 2017

This is not an issue so much as a lightweight way of gathering information on who is using MetalLB. This is mostly to satisfy our curiosity, but might also help us decide how to evolve the project.

So, if you use MetalLB for something, please chime in here and tell us more!

@danderson danderson added the question label Nov 28, 2017



rsanders commented Dec 1, 2017

We're not using it yet, but our K8S clusters are 100% bare metal, running in our data centers and on customer premises. We're going to try this out in the new year.

Our deployment would in most cases have this peer directly with our routers. We're a mostly Cisco shop. Which BGP implementations have you tested with?




danderson commented Dec 1, 2017

So far, I've only tested with OSS BGP implementations, mainly BIRD. I used to have some Juniper and Cisco hardware that could speak BGP, but I got rid of them :(.

With that said, MetalLB currently speaks a very conventional, "has existed forever" dialect of BGP, so it should work fine. I foresee two possible issues interacting with Cisco's implementation:

  • Some old Cisco devices use an unusual encoding for capability advertisements in the OPEN message. AFAICT, they stopped doing this in the late 1990s, so it shouldn't be a problem. If it is a problem, it's a trivial patch to fix it.
  • Cisco may refuse to speak to a BGPv4 speaker that uses the original route encoding for IPv4 (which the RFCs say is still 100% correct), and instead requires all route advertisements to use the MP-BGP encoding. I'm in the middle of making MetalLB speak MP-BGP for dual-stack support anyway, so hopefully by the time you test MetalLB this should not be a problem.
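To make the difference between the two route encodings concrete, here's a rough sketch (illustrative only, not MetalLB's actual code; field layouts per RFC 4271 and RFC 4760, with made-up prefix and next-hop values):

```python
import struct

def classic_nlri(prefix: str, plen: int) -> bytes:
    """Original BGP-4 IPv4 NLRI: prefix length in bits, then just
    enough octets to hold the prefix."""
    octets = bytes(int(o) for o in prefix.split("."))[: (plen + 7) // 8]
    return bytes([plen]) + octets

def mp_reach_ipv4(prefix: str, plen: int, next_hop: str) -> bytes:
    """The same route wrapped in an MP_REACH_NLRI path attribute
    (RFC 4760), AFI 1 / SAFI 1 = IPv4 unicast, which is what the
    stricter implementations expect."""
    nh = bytes(int(o) for o in next_hop.split("."))
    body = (
        struct.pack("!HBB", 1, 1, len(nh))  # AFI, SAFI, next-hop length
        + nh                                 # next hop
        + b"\x00"                            # reserved octet
        + classic_nlri(prefix, plen)         # the NLRI itself is unchanged
    )
    # attribute flags 0x80 = optional, attribute type 14 = MP_REACH_NLRI
    return bytes([0x80, 14, len(body)]) + body

print(classic_nlri("198.51.100.0", 24).hex())               # 18c63364
print(mp_reach_ipv4("198.51.100.0", 24, "192.0.2.1").hex())
```

The prefix bytes themselves are identical either way; the MP-BGP form just moves them (plus the next hop) into a path attribute instead of the UPDATE message's top-level NLRI field.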

If you hit any interop problems, please file bugs! Ideally with a pcap that captures the initial exchange of BGP OPEN messages and a few UPDATE messages, so that I can add regression tests.




xnaveira commented Dec 4, 2017

A colleague brought my attention to this project recently. We're running bare metal k8s on bgp enabled hosts and pods so this looks very interesting. Unfortunately we are using 32-bit AS numbers in our setup so we can't test it right away but it looks promising!




danderson commented Dec 5, 2017

@xnaveira MetalLB supports 32-bit AS numbers! It's the one BGP extension that's implemented in v0.1.0. v0.2.0 will also support MP-BGP, so if you use Quagga or Cisco devices, you'll want to wait for 0.2.0.
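For anyone curious how 32-bit AS support works on the wire: per RFC 6793, the real ASN rides in a capability in the OPEN message, while the 16-bit "My Autonomous System" field carries the placeholder AS_TRANS. A rough illustrative sketch (not MetalLB's actual code):

```python
import struct

AS_TRANS = 23456  # RFC 6793 placeholder for the 16-bit "My AS" field

def four_octet_as_param(asn: int) -> bytes:
    """OPEN optional parameter carrying the 4-octet AS capability."""
    cap = bytes([65, 4]) + struct.pack("!I", asn)  # capability code 65, real ASN
    return bytes([2, len(cap)]) + cap              # parameter type 2 = capability

def open_my_as_field(asn: int) -> int:
    """16-bit 'My AS' field: the real ASN if it fits, otherwise AS_TRANS."""
    return asn if asn < 65536 else AS_TRANS

print(four_octet_as_param(4_200_000_000).hex())
print(open_my_as_field(4_200_000_000))  # 23456
```

Both sides advertise the capability, then carry real 4-octet ASNs in the AS path; a peer that doesn't support it only ever sees AS_TRANS.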




xnaveira commented Dec 5, 2017

That is awesome @danderson, I'll give it a try ASAP then.



tangjoe commented Dec 8, 2017

I saw a post about MetalLB, and this is what I had been looking for, for a long time, for my local K8S on Mac. It took about a day of following the tutorial and experimenting, but finally I have it up and running. Now I can deploy a LoadBalancer service in k8s without it staying "pending". Awesome.



halfa commented Dec 8, 2017

We have successfully deployed a patched version of MetalLB on a staging K8S cluster at @naitways, peered with JunOS hardware appliances. Awesome! :)




danderson commented Dec 8, 2017

\o/ What patching did you need to interface with JunOS? I'm going to spin up a virtual JunOS later today and do interop testing, but it sounds like you already found the issues?



jacobsmith928 commented Jan 8, 2018

@danderson happy to support some bare metal and diverse switching (Juniper) + local and global BGP testing at Maybe take advantage of ?



zcourts commented Jan 14, 2018

We've just deployed MetalLB to our new test environment on a 1.9 K8s cluster and are eagerly watching #7 and #83. We've got a use case where any and all pods can be publicly routable, so we're looking at having an IPv6-only cluster with each pod/svc getting an IPv6 address.

We need non-HTTP services, so we're planning to use IPs to identify pods when SNI/TLS isn't available (we're launching with thousands of these this summer and expect that to grow to tens of thousands in 12-18 months). We're aligning some things with the K8s 1.10 release for beta IPv6 support and will probably be running alpha versions of things in test leading up to launch.
FWIW we use OVH, and each server comes with a /64 IPv6 block, so when this is in, being able to draw on that pool from each K8s compute node will be ideal. As it stands we have no Go expertise, but if we can contribute in any other way, do let us know. We're comfy with C++/Java/Scala, and I'll probably be learning Go this year since we're committed to K8s.



hameno commented Jan 31, 2018

I just tried this in ARP mode in my k8s-cluster @ home. Works so far, thanks for this great project 👍
I may also deploy this at work in the future for an internal development cluster.



aphistic commented Feb 18, 2018

Just jumping in as well. I'm using ARP mode in my home lab with a cluster I set up following Kubernetes the Hard Way to learn Kubernetes. I'm using Weave Net to handle my cluster networking and running on XenServer VMs. I haven't gotten metallb running correctly yet but I'm working on it. :)



pfcurtis commented Feb 21, 2018

We are using MetalLB on a 30-node k8s cluster (internally) in production. So far, no issues.



pehlert commented Feb 22, 2018

Just wanted to say thank you for this project! I had spent hours trying to figure out several issues with keepalived-vip before stumbling across MetalLB. I installed it in 5 minutes and it just works (and it's a more elegant approach, too). Time will tell how stable it is, but no issues whatsoever so far!



ewoutp commented Mar 11, 2018

Running it on both a 3-node OrangePi PC2 cluster and a 3-VM CoreOS cluster with k8s 1.9.
Works like a charm!
Love the ease of installation.



ChristofferNicklassonLenswayGroup commented Mar 21, 2018

Hi, we will use this when we launch our ongoing k8s project.
I am also using it on my cluster at home.
Just love it :)



ebauman commented Mar 27, 2018

This project is lovely.

I'm using it on my home cluster, and plan to use it on my cluster at work.
We have an old-school deployment at my employer which doesn't afford me the flexibility to set up such niceties as BGP (or really any routing protocol that isn't EIGRP). Almost everything we do is layer 2 separated, so I felt left in the dust by people who got to just specify type: LoadBalancer and off they went.

This project makes k8s fit into my org.



joestafford commented Apr 2, 2018

Using this LB implementation in my home lab to learn k8s and look forward to using it in a project at work!



szabopaul commented Apr 8, 2018

Using this project on my homelab to facilitate Ansible configuration of single container pods is a breeze. Amazing work!



mpackard commented Apr 13, 2018

We are using this to try out Kubernetes on our own hardware. Thanks for creating it.



fsdaniel commented Apr 18, 2018

Running it against SRX and MX juniper hardware.

Thanks for making it!




uablrek commented Apr 19, 2018

I am evaluating MetalLB for use in "bare-VM" rather than bare-metal environments. There is, however, no difference; the problems are the same.
For testing I "install" MetalLB by starting the two binaries directly on the node VMs, i.e. not in pods. Thanks to the simple and elegant architecture, this works perfectly fine.

I have learned a lot about Kubernetes by studying metallb. Thanks!



schmitch commented Apr 25, 2018

My company started trying out MetalLB.
We first tried to use it via layer 2 (we use calicoctl and had configured a BGP peer that was not in use).
However, only one client could connect to the service, and we had no idea why. Maybe ARP packets were being filtered somewhere.

I then removed the BGP peering from Calico and used it with MetalLB, which finally worked.
It's really cool to have MetalLB.

However, it's a pity that MetalLB doesn't have some LVS-like way of attaching IPs in layer 2 mode that doesn't rely on ARP requests; that would make it useful and less error-prone in more networks.



michaelfig commented May 1, 2018

I'm getting my feet wet with Kubernetes at my company. MetalLB has proven really useful for repeatable configuration of Ingresses (using nginx-ingress) on available layer 2 IPs. Thanks for this useful software!




FireDrunk commented May 1, 2018

I'm running it at home in my testing Kubernetes cluster (which is running Weave). It's connected to my pfSense router via BGP. This setup would be perfect in a datacenter with a specific IP space for a DMZ.
If anyone wants the configs, I'd be happy to share.

Thanks for an awesome piece of software!



nrbrt commented May 7, 2018

I am running it in my home lab; I switched to MetalLB after using keepalived-vip for a while and not finding it stable enough. Keepalived-vip would work fine for a while and then lock up, forcing me to manually delete the "master" pod, after which things would start working again. I hope my worries are over now with MetalLB.



mxey commented May 9, 2018

We are using MetalLB in our on-premises cluster. At the moment, we run it in ARP mode, but we are working towards BGP mode with our Juniper switches.

MetalLB is really great to have; it takes away a big part of our envy of Kubernetes on cloud infrastructure :)



carldanley commented May 16, 2018

@FireDrunk I'd be curious to know what your configs are. I want to test BGP with my homelab cluster (+ pfSense).




FireDrunk commented May 18, 2018

My config (all done via web interface of pfSense):

# This file was created by the package manager. Do not edit!

AS 64512
fib-update yes
holdtime 90
listen on
network inet connected
group "kubernetes" {
	remote-as 64512
	neighbor {
		descr "k8s-master-001"
		announce all
	}
	neighbor {
		descr "k8s-node-001"
		announce all
	}
	neighbor {
		descr "k8s-node-002"
		announce all
	}
}
deny from any
deny to any
allow from
allow to
allow from
allow to
allow from
allow to

My metallb config:

peers:
- peer-address:
  peer-asn: 64512
  my-asn: 64512
address-pools:
- name: default
  protocol: bgp
  avoid-buggy-ips: true

My Kubernetes machines have, and my BGP subnet is, which could very well be something inside the same range.

I've found one flaw in this config: pfSense always routes to one of the hosts (whichever came online last, I think). It doesn't automatically balance between routes with the same metric.
I've yet to play with BGP multipath.



Oded-B commented May 22, 2018

Currently in the process of deploying it on our infrastructure: 8 clusters around the globe, 600 bare metal nodes, peering against Calico nodes (patched, #114); Calico in turn peers against ToR switches (Arista/Cisco).

We are using it for datacenter-internal IP ranges only, as we need a way to limit user access to the external IP pools while keeping the service object (ClusterIP/LoadBalancer with an internal IP pool) available to everyone.
We'll either set up an admission controller or use a transparent-mode firewall so the IT/infosec people can have the final word on exposing a new service to the world.



taitd commented May 23, 2018

I've deployed MetalLB in layer 2 mode to a 4-node kubespray Kubernetes cluster, networked with Weave, on Terraformed VMs hosted on my home bare-metal OpenNebula KVM IaaS, which runs on 4 fanless PCs with 8-16 GB RAM and 4-8 CPUs each.

MetalLB is fantastic. I just installed the manifest and followed your LAYER 2 MODE TUTORIAL, and then, just like magic, the LoadBalancer was assigned an external IP. Wow.

I learn and experiment, using all Kubernetes features without the cost of expensive cloud-based public IPs.


BigUp to all involved.



pimvanpelt commented May 27, 2018

Installed on my home k8s cluster - worked fine so far (layer2).



sfudeus commented Jun 5, 2018

We are in the process of switching all our on-premises bare-metal clusters which are not publicly exposed from a commercial load-balancer platform to MetalLB, and have had no issues with it so far.
We are using BGP via ToR routers. Since we have enhanced needs in IPAM, we do not use auto-assignment of addresses but define them statically in the services. Future work would bring a dedicated controller doing our IPAM, which would reduce our use to just the speakers, not the controller anymore.
A big thank you to all authors, maintainers and contributors, and especially @danderson; this was exactly what we were looking for at the moment.



stevemcquaid commented Jun 8, 2018

I'm using MetalLB in my homelab as well. Working fantastically in layer 2 (ARP) mode!



ColdFreak commented Jun 8, 2018

We have our own cloud environment. When we installed Jenkins X in our Kubernetes cluster, we used MetalLB so that Jenkins X could create a LoadBalancer-type service. Thanks for your work.



isaron commented Jun 11, 2018

We are using MetalLB in our k8s cluster in the pre-production stage. Thanks for your great work!



5at commented Jul 7, 2018

Installed in my home lab. Worked first go!! (layer 2) Great job!



mristok commented Jul 9, 2018

Using MetalLB in a pre-production cluster (layer 2), and it is working great. We had no problems implementing it, and the O&M has been effortless.



kelvinatstreetline commented Jul 18, 2018

We are using MetalLB in layer 2 mode for on-premises K8s testing.



RdL87 commented Aug 8, 2018

Hello, we are testing the MetalLB solution on our Kubernetes cluster.
Great job! We are very interested in having this solution in production.
@albebeto @alexbarchiesi



vishvish commented Aug 14, 2018

I've just finished my PoC proving that I can cluster a legacy app on bare metal k8s using MetalLB!



bradfitz commented Aug 17, 2018

Installed in my homelab in BGP mode.

(You know this, but joining the happy user list. :))



barthoda commented Aug 28, 2018

We are running some experiments with this at



travisghansen commented Aug 31, 2018

I'm using this in various capacities and it's worked great in each. @FireDrunk, one interest I have is putting together some glue between this and pfSense to automatically update the OpenBGPD config based on nodes coming and going. If you have any input to provide or are interested in the same idea, let me know.



rhydian76 commented Sep 19, 2018

Just started using MetalLB for bare metal K8s in test & dev environments. Really positive experience so far. Love the fact that it's so simple compared to K8s ingresses, which we found awkward for bare metal K8s clusters. Watching this project with interest, with a view to rolling it out to production clusters.



travisghansen commented Sep 19, 2018

For anyone using pfSense I've created a controller that integrates with metallb to automatically update neighbors:



Fufuhu commented Sep 28, 2018

I use MetalLB in my home lab in L2 mode.
It's so easy to use.
With it, I could finally make use of my k8s cluster at home.



n8behavior commented Oct 1, 2018

Can't believe I'm just learning about this project. I use MaaS to provision metal servers to deploy various clusters. I'm super excited to set up MetalLB this weekend with our lab at -- we're just getting started and this will be a great addition.



kmaris commented Oct 11, 2018

I've been testing it in my home lab and in the workplace. We're just starting so no real impressions as yet. Will update with more when I've got some more experience with it.



ErmakovDmitriy commented Oct 21, 2018

I am going to test MetalLB in my home lab environment for studying.
Thank you for your work!



eknudtson commented Oct 30, 2018

We just set up MetalLB on our k8s cluster.

We use routing to the host with BGP unnumbered via Free Range Routing (FRR) and Cumulus Linux.

We use the Calico CNI in policy-only mode, and rely on FRR on the host to advertise both the host routes and pod routes into our spine/leaf network.

We updated our MetalLB daemonset to use pod networking instead of host networking. We then have MetalLB peer with the host FRR using dynamic BGP peers (and a peer + filter for each host in the MetalLB configmap). This allows MetalLB to announce its IPs into the routing mesh without needing to peer with ToR leaves.

Working great so far! With externalTrafficPolicy: Local, we're able to avoid source-IP problems by sending traffic only to nodes with service pods/endpoints on them.
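If it helps anyone reproduce the source-IP-preserving part of this, a minimal sketch of such a Service (the names, selector, and ports are placeholders, not our actual manifests):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-svc            # placeholder name
spec:
  type: LoadBalancer           # MetalLB assigns the external IP
  externalTrafficPolicy: Local # only nodes with local endpoints attract
                               # traffic, preserving the client source IP
  selector:
    app: example               # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
```

The trade-off with Local is that traffic is not spread across all nodes, only across nodes that actually host an endpoint for the service.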



tmadrms commented Nov 12, 2018

Thanks for putting this out into the world; exactly what I was looking for, for my test cluster!!!
