
Expose a service with LoadBalancer on non-cloud providers using custom image for the load balancer #36220

Closed
dgonzalez opened this issue Nov 4, 2016 · 17 comments

@dgonzalez commented Nov 4, 2016

Here is one idea:

When you are running a cluster on your own bare-metal infrastructure, you cannot expose a service (or anything else) using type LoadBalancer, since there is no cloud load balancer to provision.

When you are running in a cloud provider, Kubernetes uses the provider's native load balancer (e.g. Elastic Load Balancer on AWS), provisioning and configuring it for you.

My point is:

Why not have an extra flag in the expose command that lets you specify the image to use as a load balancer in your bare-metal cluster? E.g.:

kubectl expose rc my-service --type LoadBalancer --image my-custom-ha-proxy

That would deploy a container from my-custom-ha-proxy and use it in much the same way the ELB is used on AWS.
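
For illustration only, the flag might be sugar for something like the following; neither the --image flag nor the generated objects exist today, and all names are made up:

# Hypothetical expansion of the proposed flag; kubectl does not do any of this today.
# Run the custom load balancer image in front of the backend...
kubectl create deployment my-custom-ha-proxy --image=my-custom-ha-proxy
# ...then expose the load balancer itself so traffic from outside the cluster can reach it.
kubectl expose deployment my-custom-ha-proxy --type=NodePort --port=80 --target-port=80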

This came up in a conversation in the Slack channel #kubernetes-users; the original question was "how do I know the IP of the caller from within the cluster?" This approach would answer it, because the custom load balancer could set the X-Forwarded-For header before passing requests on to the destination service.
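
To make that last point concrete: an haproxy-based image could add the header with option forwardfor. The ConfigMap below is purely illustrative; the resource names, how the config gets mounted, and the backend address are assumptions, not part of the proposal.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-custom-ha-proxy-cfg      # hypothetical name
data:
  haproxy.cfg: |
    defaults
      mode http
      timeout connect 5s
      timeout client  30s
      timeout server  30s
      # adds X-Forwarded-For carrying the original client IP
      option forwardfor
    frontend fe
      bind *:80
      default_backend be
    backend be
      # the exposed service's in-cluster DNS name (hypothetical)
      server my-service my-service.default.svc.cluster.local:80
EOF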

If there is already a better way of doing this, please reply with a comment and close the issue!

@justinsb (Member) commented Nov 4, 2016

Can you just deploy your controller into kube-system ahead of time, and have it watch for services of Type=LoadBalancer? That way the person running kubectl expose would not need to know whether e.g. an F5 load balancer was in use, or something using DNS with hostPorts.
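
As a rough illustration of what such a controller would key on (jq assumed available; a real controller would use a watch against the API):

# One-off equivalent of what a kube-system controller would watch for:
# every Service of type LoadBalancer, in any namespace.
kubectl get services --all-namespaces -o json \
  | jq -r '.items[] | select(.spec.type == "LoadBalancer") | "\(.metadata.namespace)/\(.metadata.name)"'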

I do think your approach is interesting for when multiple LoadBalancer solutions are in use on the same cluster, but I don't think that is particularly widespread (?)

@dgonzalez (Author) commented Nov 4, 2016

The problem with listening for Type=LoadBalancer is that magical things happen in the background. With my approach it is explicit: you have to specify an image to be provisioned (or possibly the service name of an already-running one).

@dgonzalez (Author) commented Nov 4, 2016

BTW I think the ingress controller is an interesting concept with a lot of potential.

@davetropeano commented Nov 11, 2016

@dgonzalez your comment that "[the] problem with listening for Type=LoadBalancer is magical things happening in the background" describes exactly what happens when you are using a cloud provider like AWS, GCE, OpenShift, etc. Provisioning the LB instance is an async process that happens behind the scenes.

FWIW, the service-loadbalancer contrib project does not listen for Type=LoadBalancer; instead it uses custom labels and watches for those (and you don't set the Type field). https://github.com/kubernetes/contrib/tree/master/service-loadbalancer

@dgonzalez (Author) commented Nov 14, 2016

@davetropeano I think I didn't explain myself well: what I am suggesting is provisioning a load balancer within the cluster, using a custom image, instead of an external load balancer in the cloud. I am swamped at the moment, but ping me in the Kubernetes Slack (@Davidgonza) and we can talk more about it. Maybe that is the solution, but I need to picture it.

@bgrant0607 added sig/network and removed area/kubectl labels Nov 17, 2016

@bgrant0607 (Member) commented Nov 17, 2016

@MikeSpreitzer (Member) commented Nov 18, 2016

(A) I agree with the concern expressed earlier about separation of concerns. The person deploying a service should not have to know how the cluster is implemented. I would rather see this custom LB image specified as a parameter of the controller manager. Perhaps the best way to slip this in is to make a cloud provider that just knows how to make load balancers using a custom image.

(B) I assume the idea here is to make a new container for each k8s Service. http://kubernetes.io/docs/user-guide/services/#type-loadbalancer says it is up to the cloud provider whether the user may supply spec.loadBalancerIP. In the proposal at hand, does the custom LB image have the option to forbid the user from specifying spec.loadBalancerIP? If the user does not specify spec.loadBalancerIP, then how is the ingress IP address chosen? Is it in the pod subnet, the host subnet, or somewhere else?

(C) I assume the idea is to make a pod for each LoadBalancer type of k8s Service using the custom LB image. Is that pod on the host network or the normal pod network? How does a packet sent from an external client to the Service's status.loadBalancer.ingress.ip make its way from that client into the pod? How does the reply get back to the client, without tripping spoofing detectors? To illustrate part of the concern here, consider the example of a cluster that uses Flannel or Calico networking. These do NOT bridge an external ethernet to something in or adjacent to a pod, so how would an ARP request, broadcast on the external ethernet, seeking the status.loadBalancer.ingress.ip trigger a response?

@MikeSpreitzer (Member) commented Nov 18, 2016

(D) Is the idea here to develop one generic custom LB image to use in bare metal installs, or is it expected that different bare metal environments will each need their own custom LB image?

@lichen2013 (Contributor) commented Dec 5, 2016

Hi,

I have a similar idea: use an haproxy deployment between the service and the pods. That way, even without a cloud provider, we can still have an ELB; or, put differently, we say the cloud provider is 'local'. Can this work?

[attached diagram: "elb in kubernetes"]
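
A minimal sketch of one way that could look, assuming haproxy's own configuration (not shown; it would need to be mounted into the container) proxies to the backend Service; the image tag and names are illustrative. With hostNetwork the proxy listens directly on every node's IP, which also sidesteps the ingress-IP/ARP questions raised above, at the cost of using the nodes' own addresses.

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: local-elb                  # hypothetical name
spec:
  selector:
    matchLabels:
      app: local-elb
  template:
    metadata:
      labels:
        app: local-elb
    spec:
      hostNetwork: true            # bind haproxy on each node's own IP
      containers:
      - name: haproxy
        image: haproxy:2.4         # illustrative tag
        ports:
        - containerPort: 80
EOF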

@bprashanth (Member) commented Dec 5, 2016

@dgonzalez do you really want to couple the deployment of the load balancer to the exposure of a single backend, or do you just want a load balancer implementation for bare metal (with deploying one LB per pod just a means to that end)?

@dgonzalez (Author) commented Mar 1, 2017

Sorry, I've been disconnected from this for a while.

@bprashanth no, what I want is to be able to create my own load balancer on bare metal, but this is also useful in the cloud. Example: I want to throttle the requests to my app. What I would do here is create an nginx image with the throttling configuration and expose my service as LoadBalancer using that image as the load balancer. Does that make sense?
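
A rough sketch of that throttling example, assuming a custom nginx image (or a config mounted from a ConfigMap like the one below) in front of the backend Service; the zone name, rate, and upstream address are made up for illustration:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-throttle-conf        # hypothetical name
data:
  throttle.conf: |
    # allow 10 requests/second per client IP, with a small burst
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
    server {
      listen 80;
      location / {
        limit_req zone=perip burst=20;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://my-service.default.svc.cluster.local;
      }
    }
EOF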

@uiansol commented Jul 27, 2017

I am also interested in something like this. I need to be able to create load balancers on-premises, without a cloud provider.

@dgonzalez Did you make any progress on this feature? I would like to help with the discussion and maybe a proposal for it.

@stepin commented Dec 14, 2017

Looks like a custom Cloud Controller Manager can now be used for such tasks (a custom LoadBalancer on bare metal):
https://kubernetes.io/docs/tasks/administer-cluster/developing-cloud-controller-manager/
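
Roughly, that path means running the kubelets with --cloud-provider=external and deploying your own cloud-controller-manager that implements the LoadBalancer part of the cloud-provider interface; a very rough outline (the manifest name is hypothetical):

# Nodes opt out of the built-in cloud providers...
kubelet --cloud-provider=external   # ...plus your usual kubelet flags
# ...and a custom cloud-controller-manager, deployed like any other workload,
# handles Services of type LoadBalancer.
kubectl apply -f my-cloud-controller-manager.yaml   # hypothetical manifest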

@fejta-bot commented Mar 14, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot commented Apr 15, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@MarkusTeufelberger commented May 4, 2018

https://github.com/google/metallb might be worth a look for everyone who finds this issue via web search etc.
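
For context, MetalLB (in the ConfigMap-configured releases current when this comment was written) assigns external IPs to Services of type LoadBalancer from an address pool you define, e.g. in layer-2 mode; the address range below is only an example:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF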

@fejta-bot commented Jun 3, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
