Support IPv6 loadbalancer services #179

Closed
flokli opened this issue Jun 19, 2021 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@flokli (Contributor) commented Jun 19, 2021

Right now, the CCM only sets up IPv4 BGP in the API: https://github.com/equinix/cloud-provider-equinix-metal/blob/master/metal/bgp.go#L218

It should set up both IPv4 and IPv6 peers (the behaviour could possibly be made configurable).

MetalLB seems to support IPv6 well enough (both multiprotocol BGP and IPv6 LoadBalancer services).

@deitch (Contributor) commented Jun 21, 2021

This is a reasonable request. There are a few parts to this, though; it is a bit of a lift.

  • enabling BGP for IPv6 at the project level
  • enabling BGP for IPv6 at the node level (which is what you linked to above)
  • ensuring that the routines that get BGP node peers return both IPv6 and IPv4
  • consuming both when adding, syncing, or removing nodes here
  • updating the IP reservations to request both an IPv4 and an IPv6 address
  • updating the Kubernetes Service.Spec to provide an IPv6 address
  • updating the interface to implementations to support it
  • updating the MetalLB driver to support it

The Service.Spec part has me confused. Normally you use Service.Spec.LoadBalancerIP, but that is usually IPv4. Or is it potentially either? Can one specify both?

I think @detiber or @displague is most likely to know?
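
For reference, a minimal snippet (the address is made up) showing why a single LoadBalancerIP cannot express both: on corev1.ServiceSpec it is a plain string, and there is no list-valued counterpart in this API.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	var spec corev1.ServiceSpec
	spec.Type = corev1.ServiceTypeLoadBalancer
	// A single value; the spec has no loadBalancerIPs list field.
	spec.LoadBalancerIP = "198.51.100.7"
	fmt.Println(spec.LoadBalancerIP)
}
```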

@flokli (Contributor, Author) commented Jun 21, 2021

Edit: Service.Spec.LoadBalancerIP is still single-stack right now (so we could only set it for IPv4-only or IPv6-only services).
Not sure if the behaviour for dual-stack services is defined.

kubernetes/enhancements#1992 tracks the addition of Service.Spec.LoadBalancerIPs.

@flokli (Contributor, Author) commented Jun 21, 2021

I assume for now people will need to define one service with .spec.ipFamilies = ["IPv4"] and one with .spec.ipFamilies = ["IPv6"].
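
A minimal sketch of that workaround (the Service names, namespace, selector, and port are illustrative): two single-stack LoadBalancer Services over the same pods, one per address family.

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// singleStackLB builds a LoadBalancer Service pinned to one IP family.
func singleStackLB(name string, family corev1.IPFamily) *corev1.Service {
	policy := corev1.IPFamilyPolicySingleStack
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Type:           corev1.ServiceTypeLoadBalancer,
			Selector:       map[string]string{"app": "web"},
			IPFamilies:     []corev1.IPFamily{family},
			IPFamilyPolicy: &policy,
			Ports:          []corev1.ServicePort{{Port: 80}},
		},
	}
}

func main() {
	// One Service per address family, both pointing at the same selector.
	_ = singleStackLB("web-v4", corev1.IPv4Protocol)
	_ = singleStackLB("web-v6", corev1.IPv6Protocol)
}
```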

@displague (Member) commented:

https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.21/#servicespec-v1-core

The service spec has a few fields that jump out:

ipFamilies (1.20+) is a list of IP families (e.g. IPv4, IPv6) assigned to this service, and is gated by the "IPv6DualStack" feature gate. This field is usually assigned automatically based on cluster configuration and the ipFamilyPolicy field. If this field is specified manually, the requested family is available in the cluster, and ipFamilyPolicy allows it, it will be used; otherwise creation of the service will fail. This field is conditionally mutable: it allows for adding or removing a secondary IP family, but it does not allow changing the primary IP family of the Service. Valid values are "IPv4" and "IPv6". This field only applies to Services of types ClusterIP, NodePort, and LoadBalancer, and does apply to "headless" services. This field will be wiped when updating a Service to type ExternalName. This field may hold a maximum of two entries (dual-stack families, in either order). These families must correspond to the values of the clusterIPs field, if specified. Both clusterIPs and ipFamilies are governed by the ipFamilyPolicy field.

ipFamilyPolicy (1.20+) represents the dual-stack-ness requested or required by this Service, and is gated by the "IPv6DualStack" feature gate. If there is no value provided, then this field will be set to SingleStack. Services can be "SingleStack" (a single IP family), "PreferDualStack" (two IP families on dual-stack configured clusters or a single IP family on single-stack clusters), or "RequireDualStack" (two IP families on dual-stack configured clusters, otherwise fail). The ipFamilies and clusterIPs fields depend on the value of this field. This field will be wiped when updating a service to type ExternalName.

The IPv6DualStack feature gate became beta (enabled by default) in 1.21, and has been alpha since 1.15.

More details and examples at https://kubernetes.io/docs/concepts/services-networking/dual-stack/
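
Putting the two fields quoted above together, here is a minimal sketch (the Service name, namespace, selector, and port are made up) of a LoadBalancer that requires both families on a 1.20+ dual-stack cluster:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func dualStackLB() *corev1.Service {
	policy := corev1.IPFamilyPolicyRequireDualStack
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "web-dual", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "web"},
			// The first family listed becomes the primary one; clusterIPs,
			// if set, must line up with this order.
			IPFamilies:     []corev1.IPFamily{corev1.IPv4Protocol, corev1.IPv6Protocol},
			IPFamilyPolicy: &policy,
			Ports:          []corev1.ServicePort{{Port: 80}},
		},
	}
}

func main() {
	_ = dualStackLB()
}
```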


In older versions:

ipFamily (1.16-1.19) specifies whether this Service has a preference for a particular IP family (e.g. IPv4 vs. IPv6) when the IPv6DualStack feature gate is enabled. In a dual-stack cluster, you can specify ipFamily when creating a ClusterIP Service to determine whether the controller will allocate an IPv4 or IPv6 IP for it, and you can specify ipFamily when creating a headless Service to determine whether it will have IPv4 or IPv6 Endpoints. In either case, if you do not specify an ipFamily explicitly, it will default to the cluster's primary IP family. This field is part of an alpha feature, and you should not make any assumptions about its semantics other than those described above. In particular, you should not assume that it can (or cannot) be changed after creation time; that it can only have the values "IPv4" and "IPv6"; or that its current value on a given Service correctly reflects the current state of that Service. (For ClusterIP Services, look at clusterIP to see if the Service is IPv4 or IPv6. For headless Services, look at the endpoints, which may be dual-stack in the future. For ExternalName Services, ipFamily has no meaning, but it may be set to an irrelevant value anyway.)

@deitch (Contributor) commented Jun 21, 2021

@flokli I just spent time going through that enhancement proposal. As far as I can tell, it looks like loadBalancerIPs will not be implemented.

I think that the simplest first step here is to support either IPv6 or IPv4 in the CCM. Let it look at the requested family and set up BGP for that specific family. In the future, we can look at single-Service dual-stack support.

Is there an easier way?

@flokli (Contributor, Author) commented Jun 21, 2021

Yes, the KEP seems to be stalled. Thanks for your follow-up question there!

> I think that the simplest first step here is to support either IPv6 or IPv4 in the CCM. Let it look at the requested family and set up BGP for that specific family.

Yeah, that's what I meant. Essentially, look at the ipFamilies field on the Service resource, and if it's IPv4 or IPv6 (but not both), allocate an IP address for that family.

> In the future, we can look at single-Service dual-stack support.

Yeah, once there's one way or another to "annotate" multiple load balancer IPs (be it annotations or fields). For the time being, people looking to expose something dual-stack can just deploy two Services, one for each address family (a disappointing hack, but a workaround until there's a better way).
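
A hedged sketch of that first step, assuming the provider side exposes some per-family reservation path: requestIPv4Reservation and requestIPv6Reservation are hypothetical placeholders, not CCM functions, and the returned addresses are documentation dummies.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// reserveForService inspects the Service's single requested IP family and
// hands off to a per-family reservation path.
func reserveForService(svc *corev1.Service) (string, error) {
	if len(svc.Spec.IPFamilies) != 1 {
		// Dual-stack (or unset) services are out of scope for a first iteration.
		return "", fmt.Errorf("only single-stack services supported, got %v", svc.Spec.IPFamilies)
	}
	switch svc.Spec.IPFamilies[0] {
	case corev1.IPv6Protocol:
		return requestIPv6Reservation(svc) // hypothetical placeholder
	default:
		return requestIPv4Reservation(svc) // hypothetical placeholder
	}
}

// Stubs so the sketch compiles; a real integration would call the provider's
// reservation API here.
func requestIPv4Reservation(*corev1.Service) (string, error) { return "198.51.100.10", nil }
func requestIPv6Reservation(*corev1.Service) (string, error) { return "2001:db8::10", nil }

func main() {
	svc := &corev1.Service{}
	svc.Spec.IPFamilies = []corev1.IPFamily{corev1.IPv6Protocol}
	ip, err := reserveForService(svc)
	fmt.Println(ip, err)
}
```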

@rsmitty commented Oct 26, 2023

Hey @displague, hope you're well. Has there been any further discussion around IPv6 support here? The Sidero Labs team is working on a new dual-stack cluster, but we're unable to create dual-stack services with nginx-ingress.

@k8s-triage-robot commented:

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label on Jan 31, 2024
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 1, 2024
@k8s-triage-robot commented:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot (Contributor) commented:

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot closed this as not planned on Mar 31, 2024