
Support mixed protocols in service.type=loadbalancer #23880

Closed
bprashanth opened this issue Apr 5, 2016 · 66 comments · Fixed by #94028
Labels
kind/feature Categorizes issue or PR as related to a new feature. sig/network Categorizes an issue or PR as relevant to SIG Network.

Comments

@bprashanth
Contributor

It should be possible to use a single IP to direct traffic to multiple protocols with a single Service of Type=Loadbalancer. We just need to:

  1. Promote ephemeral to static ip
  2. Create another forwarding rule with the same static ip, but a different protocol/port

Unfortunately we need this dance because a single forwarding rule only supports 1 protocol.

When we do this we should make sure the firewall rules are opened up for the right protocol: https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/gce/gce.go#L927 (that just takes the first protocol for the firewall rule).

Is there a reason we didn't do this initially?

@bprashanth bprashanth added sig/network Categorizes an issue or PR as relevant to SIG Network. team/cluster labels Apr 5, 2016
@vsimon
Contributor

vsimon commented Jun 6, 2016

It would be very nice to support this.

@samuraisam

Does #24090 support this?

@therc
Member

therc commented Jul 15, 2016

For AWS, you can use a combination of #23495 and #26268

@samuraisam

Can it be accomplished on GCE?

@thomasbarton

@samuraisam I was able to work around this limitation by creating two separate Services of type=LoadBalancer, one for TCP and one for UDP. Setting loadBalancerIP: XXX.XXX.XXX.XXX to the same IP address in both makes it work. I'm posting this in case anyone else runs into the same issue.
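For illustration, the two-Service workaround could look like the sketch below. Names, the selector, and the IP are placeholders, and whether a cloud provider actually honours the same loadBalancerIP in two Services varies by provider:

```yaml
# Sketch of the workaround: one TCP Service and one UDP Service sharing
# a pre-reserved static IP. All names and the IP are hypothetical.
apiVersion: v1
kind: Service
metadata:
  name: my-app-tcp
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # same static IP in both Services
  selector:
    app: my-app
  ports:
    - name: dns-tcp
      port: 53
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-udp
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # same static IP in both Services
  selector:
    app: my-app
  ports:
    - name: dns-udp
      port: 53
      protocol: UDP
```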

@ensonic

ensonic commented Mar 15, 2017

@thomasbarton Did you run it once, check what IP it got and then hard-coded the IP?

@thomasbarton

@ensonic On GCE you can do that, but I recommend reserving the IP address first and making it static, then creating both Services with loadBalancerIP set.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2017
@ensonic

ensonic commented Dec 22, 2017

/remove-lifecycle stale
This would still be nice to have.

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2017
@ffledgling

Bump. This would be really nice to have!

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2018
@ffledgling

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 14, 2018
@kiall

kiall commented Jul 25, 2018

As a very very quick hack, I simply removed the validation code that enforces only a single protocol:

diff --git a/pkg/apis/core/validation/validation.go b/pkg/apis/core/validation/validation.go
index 7050c604e5..7747d527fc 100644
--- a/pkg/apis/core/validation/validation.go
+++ b/pkg/apis/core/validation/validation.go
@@ -3714,9 +3714,6 @@ func ValidateService(service *core.Service) field.ErrorList {
                                includeProtocols.Insert(string(service.Spec.Ports[i].Protocol))
                        }
                }
-               if includeProtocols.Len() > 1 {
-                       allErrs = append(allErrs, field.Invalid(portsPath, service.Spec.Ports, "cannot create an external load balancer with mix protocols"))
-               }
        }
 
        if service.Spec.Type == core.ServiceTypeClusterIP {

After this, I rebuilt kube-apiserver, and with MetalLB, mixed protocol load balancers now work just fine:

NAME               TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
mixed-protocols    LoadBalancer   172.29.12.226   172.29.32.9   53:31874/TCP,53:31874/UDP     3m
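A Service along these lines would produce that output once the validation is removed (the selector is a hypothetical placeholder; everything else matches the listing above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mixed-protocols
spec:
  type: LoadBalancer
  selector:
    app: dns        # hypothetical selector
  ports:
    - name: dns-tcp
      port: 53
      protocol: TCP
    - name: dns-udp
      port: 53
      protocol: UDP
```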

I think the best course forward is to:

  1. Remove this test
  2. In all existing LB implementations, add an equivalent of this validation. Likely emitting an event against the LB, rather than failing the creation of the resource in the first place.

Any objections? I'm happy to implement when I have a little time.

Updated: After a few minutes, I spotted this event:

Type     Reason                Age              From                             Message
----     ------                ----             ----                             -------
Normal   IPAllocated           7m               metallb-controller               Assigned IP "172.29.32.9"
Warning  PortAlreadyAllocated  1m (x3 over 7m)  portallocator-repair-controller  Port 31874 was assigned to multiple services; please recreate service

Everything still works; I suspect the portallocator-repair-controller needs an update to consider port and protocol, rather than just port.
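The fix hinted at above can be illustrated abstractly: key node-port allocations by (port, protocol) instead of port alone, so 31874/TCP and 31874/UDP are distinct. This is only a sketch of the idea, not the actual kube-apiserver allocator code:

```go
package main

import "fmt"

// portKey keys an allocation by both node port and protocol, so that
// 31874/TCP and 31874/UDP are treated as distinct allocations.
type portKey struct {
	port     int
	protocol string // "TCP" or "UDP"
}

type allocator struct {
	used map[portKey]string // key -> owning service name
}

func newAllocator() *allocator {
	return &allocator{used: make(map[portKey]string)}
}

// allocate reserves port/protocol for svc; it fails only if the exact
// (port, protocol) pair is already owned by a different service.
func (a *allocator) allocate(port int, protocol, svc string) error {
	k := portKey{port, protocol}
	if owner, ok := a.used[k]; ok && owner != svc {
		return fmt.Errorf("port %d/%s already allocated to %s", port, protocol, owner)
	}
	a.used[k] = svc
	return nil
}

func main() {
	a := newAllocator()
	fmt.Println(a.allocate(31874, "TCP", "mixed-protocols")) // <nil>
	fmt.Println(a.allocate(31874, "UDP", "mixed-protocols")) // <nil>: different protocol, no conflict
	fmt.Println(a.allocate(31874, "TCP", "other-svc") != nil) // true: conflict on same port/protocol
}
```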

@tuminoid

We're getting this complaint from portallocator-repair-controller on 1.10.4 also on NodePort service that exposes both TCP port and UDP port on the same NodePort.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@sarabjeetdhawan

I apologize for flooding this thread with all this information. I just want to update for other folks who may get spooked by looking at my original comment about that workaround breaking my MetalLB.

By incorrectly sharing the keys between two different deployments, I broke the whole thing (thanks @brandond for pointing that out). I had to reboot the master and nodes for them to start accepting connections on those ports again. It's working fine now.

If y'all like, I can clean up this thread by deleting all my comments from here. Please let me know.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 22, 2020
@onedr0p

onedr0p commented Sep 22, 2020

/remove-lifecycle stale

@maranmaran

/remove-lifecycle stale
This would still be nice to have.

@TBBle

TBBle commented Jan 12, 2021

Indeed. And #94028 implemented it as alpha for Kubernetes 1.20; it's still up to the actual Load Balancer implementations to support it, e.g., EKS/AWS needs some work.

It's possible some LB implementations support it without changes; e.g., a browse through the MetalLB source suggests it would just work, simply because MetalLB doesn't try to prevent it and already supports mixed protocols on the same IP allocation.

@metost

metost commented Aug 17, 2021

From GKE/GCE documentation:

The following example demonstrates how this is done to support multiple TCP and UDP ports against the same internal load balancer IP.

Create a static IP in the same region as your GKE cluster. The subnet must be the same subnet that the load balancer uses, which by default is the same subnet that is used by the GKE cluster node IPs.

If your cluster and the VPC network are in the same project:

gcloud compute addresses create IP_ADDR_NAME \
    --project PROJECT_ID \
    --subnet SUBNET \
    --address=IP_ADDRESS \
    --region REGION \
    --purpose SHARED_LOADBALANCER_VIP

If your cluster is in a Shared VPC service project but uses a Shared VPC network in a host project:

gcloud compute addresses create IP_ADDR_NAME \
    --project SERVICE_PROJECT_ID \
    --subnet projects/HOST_PROJECT_ID/regions/REGION/subnetworks/SUBNET \
    --address=IP_ADDRESS \
    --region REGION \
    --purpose SHARED_LOADBALANCER_VIP

https://cloud.google.com/kubernetes-engine/docs/how-to/internal-load-balancing#shared_VIP
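As a sketch of how the reserved address is then consumed: two Services (one per protocol) both set loadBalancerIP to the shared VIP. The Service names, selector, and ports below are placeholders; the annotation is GKE's internal load balancer annotation as documented at the link above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: shared-vip-tcp
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  loadBalancerIP: IP_ADDRESS   # the reserved SHARED_LOADBALANCER_VIP address
  selector:
    app: my-app
  ports:
    - port: 80
      protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: shared-vip-udp
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  loadBalancerIP: IP_ADDRESS   # same reserved address
  selector:
    app: my-app
  ports:
    - port: 53
      protocol: UDP
```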

@TBBle

TBBle commented Aug 17, 2021

The mechanism described there is a manual workaround for the lack of this feature; I'm also not sure (I don't use GKE), but it looks like those docs are for something that works as a load balancer only to other things in the same VPC?

Sadly, it appears that the current latest cloud-provider-gcp implementations of both internal and external load balancers do not yet support MixedProtocolLBService, and will do the wrong things if it is enabled.

@bmwadforth

MixedProtocolLBService as of right now is a beta feature gate for v1.24.

https://kubernetes.io/docs/concepts/services-networking/service/#load-balancers-with-mixed-protocol-types

I haven't tested this but you should now be able to create a cluster using that version in GCP and you will be able to define a LoadBalancer service with ports open on more than one protocol.
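On clusters where the gate is not yet on by default, it can be enabled through the standard Kubernetes feature-gates flag on the API server (the rest of the command line is elided; how you pass the flag depends on how your control plane is deployed):

```
kube-apiserver --feature-gates=MixedProtocolLBService=true ...
```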

dark-vex added a commit to dark-vex/infra-cd that referenced this issue Aug 15, 2022
Signed-off-by: Daniele De Lorenzi <daniele.delorenzi@fastnetserv.net>
(The same commit reference from dark-vex/infra-cd appears seven more times.)