Replace the MetalLB configmap with k8s custom resources #196
Comments
> MetalLB currently uses a ConfigMap for its configuration. This is easy, but not very kubernetes-native.

How does this look from the kubectl side? I quite like the simplicity a ConfigMap gives me. Is this still true for custom resources?

That "forking the k8s go client" makes this look like a non-starter, though.

/Miek
In k8-ipcontroller they are controlled with kubectl.
Kubectl has some automagic support for handling CRDs, which would let you edit something that looks like:

```yaml
apiVersion: metallb.universe.tf/v1
kind: AddressPool
metadata:
  name: my-addr-pool
spec:
  protocol: layer2
  cidr:
  - 192.168.16.240/30
```

The exact content of each CRD is still to be figured out, but that's approximately what things would look like. Agreed that a configmap makes things simpler by putting everything in one place... But that also means we lack handy objects to attach things like event logs. Maybe that's fine and we should just do structured logging in MetalLB instead, I don't know.
Allowing ranges that aren't CIDRs would be sweet as well. Sometimes ranges are broken up because of technical debt.
@danderson Example custom resources:

- example-ippool-bgp
- example-bgppeer
I like the IPPool CRD you're suggesting. Some potentially small naming questions:

- Is there a reason you used the name IPPool instead of the MetalLB configuration parameter AddressPool?
- BGP and layer2 are referenced in the documentation as modes. Is the word "protocol" more descriptive to you?

I'm confused about the other CRDs, though. Can you share use cases where you would use the CRD to create/modify a BGPAdvertisement or BGPPeer? That seems like something we can trust MetalLB's BGP code to do for us.
The other CRDs are mirroring parts of MetalLB's configuration, for setting up BGP peers and configuring how BGP advertisements are computed from addresses.

I'm not convinced that it's worth separating BGPAdvertisement into its own CRD. It cleans stuff up a little bit by letting multiple address pools reference the same advertisement configuration... But that's already a feature that has, AFAIK, approximately zero users, and breaking it out adds significant extra complexity, because the code now has to deal with keeping these two data structures in sync given k8s's eventually consistent announcements.

Aside from that, yeah, that's roughly the shape I was expecting. +1 to Alex's naming suggestions. Also be aware that I have no time to review PRs in this direction, so don't expect much activity if you start sending PRs :(
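To make that tradeoff concrete, here is a hedged sketch of what breaking BGPAdvertisement out might look like, with two pools referencing one shared advertisement object. Resource names and fields are illustrative only (loosely borrowed from the ConfigMap's bgp-advertisements section), not a final design:

```yaml
# Illustrative sketch -- names and fields are not final.
apiVersion: metallb.universe.tf/v1
kind: BGPAdvertisement
metadata:
  name: default-adv
spec:
  aggregation-length: 32
---
apiVersion: metallb.universe.tf/v1
kind: AddressPool
metadata:
  name: pool-a
spec:
  protocol: bgp
  cidr:
  - 10.0.0.0/28
  bgp-advertisements:
  - default-adv
---
apiVersion: metallb.universe.tf/v1
kind: AddressPool
metadata:
  name: pool-b
spec:
  protocol: bgp
  cidr:
  - 10.0.0.16/28
  bgp-advertisements:
  - default-adv
```

The cross-object reference (`bgp-advertisements: [default-adv]`) is exactly the piece that would have to be kept in sync under eventually consistent watches.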
Closing this as superseded by #942.
@fedepaol maybe it's time to update this doc page: https://metallb.universe.tf/faq/
@BloodyIron the release is not out yet; when MetalLB with CRDs is released, we will update the docs.
Oh, I thought I read that this was merged into master a while ago, and I came across the doc page pointing here... but it sounds like you're on top of it, just wanted to help :P
Sure, and thanks for that!
MetalLB currently uses a ConfigMap for its configuration. This is easy, but not very kubernetes-native.
We should consider migrating the config to custom resource definitions. Looking at our config today, we would have ~3 custom resources.
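The discussion above sketches what an AddressPool might look like; for completeness, here is a BGPPeer in the same hypothetical style. Field names are borrowed from the existing ConfigMap format (peer-address, peer-asn, my-asn) and the resource name is illustrative, not final:

```yaml
# Illustrative sketch -- resource name and addresses are hypothetical.
apiVersion: metallb.universe.tf/v1
kind: BGPPeer
metadata:
  name: my-router
spec:
  peer-address: 10.0.0.1
  peer-asn: 64512
  my-asn: 64513
```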
Major advantage of custom resources is that Kubernetes users already know how to manipulate them, and it gives us a great attachment point for event logs (peer connected, address allocated...). This lets us inject most of what's in MetalLB logs today into the standard kubernetes event stream. It also lets us use webhooks in a more natural way, where we can allow/deny changes to individual subresources of the config.
Major disadvantage is that CRDs are a nightmare to work with in the k8s go-client, we basically have to compile our own fork of go-client to get custom type definitions, unless things have changed for the better.
Thoughts welcome on this one, I'm conflicted about whether we should do this.