
F5 LoadBalancer #12

Closed
stevesloka opened this Issue Aug 19, 2015 · 58 comments

stevesloka commented Aug 19, 2015

I'm looking to see what the effort would be to write an F5 integration. Currently my enterprise uses RHEL Atomic hosts with an F5 load balancer. We can access the F5 API to programmatically create virtual servers.

On AWS, I can specify the "LoadBalancer" service type, and it auto-creates and configures the ELB. I'm looking for a way to do exactly the same thing. I'm 100% cool with doing the work, just looking for some strategy around how and where to integrate.

Is this pluggable so I could mirror how it works on AWS, or do I need to change approaches? Being pluggable is probably important since I'm not building k8s myself; rather, I'm using the RHEL packages from Atomic upgrades.

bprashanth (Member) commented Aug 19, 2015

Nice!

So you have 3 options:

  1. Write a sham cloud provider that implements a TCPLoadBalancer interface (https://github.com/kubernetes/kubernetes/blob/48184026f16fd5a1af6cb8d754fc175115949a87/pkg/cloudprovider/cloud.go#L78). I would recommend this if you wanted something working now and you're running outside a cloud. This approach has its limitations.
  2. Write a stand alone F5 loadbalancer controller pod. Here's an example of the same thing with haproxy: https://github.com/kubernetes/contrib/tree/master/service-loadbalancer. This is better in the long term, because we're moving toward a plugin model (kubernetes/kubernetes#12827) and once we're there you will just have to drop your pod in as a plugin with some minor changes.
  3. Wait for OpenShift's F5 plugin. They already have a WIP, which they'll probably check into Kubernetes once we have the API ironed out.
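For a sense of what option 1 involves, here is a dependency-free sketch of a sham provider. The method set is modeled loosely on the TCPLoadBalancer interface linked above; the exact upstream signatures at that commit may differ, and the types here (LoadBalancerStatus, shamF5) are stand-ins, not the real API:

```go
package main

import "fmt"

// LoadBalancerStatus stands in for api.LoadBalancerStatus.
type LoadBalancerStatus struct {
	Ingress []string // external IPs handed out by the balancer
}

// TCPLoadBalancer is modeled loosely on the upstream cloudprovider
// interface; the actual signatures at that commit may differ.
type TCPLoadBalancer interface {
	GetTCPLoadBalancer(name, region string) (status *LoadBalancerStatus, exists bool, err error)
	EnsureTCPLoadBalancer(name, region string, hosts []string) (*LoadBalancerStatus, error)
	UpdateTCPLoadBalancer(name, region string, hosts []string) error
	EnsureTCPLoadBalancerDeleted(name, region string) error
}

// shamF5 satisfies the interface by recording desired state in memory;
// a real provider would call the F5 API instead.
type shamF5 struct {
	vips map[string][]string // vip name -> backend hosts
}

func newShamF5() *shamF5 { return &shamF5{vips: map[string][]string{}} }

func (f *shamF5) GetTCPLoadBalancer(name, region string) (*LoadBalancerStatus, bool, error) {
	hosts, ok := f.vips[name]
	if !ok {
		return nil, false, nil
	}
	return &LoadBalancerStatus{Ingress: hosts}, true, nil
}

func (f *shamF5) EnsureTCPLoadBalancer(name, region string, hosts []string) (*LoadBalancerStatus, error) {
	f.vips[name] = hosts
	return &LoadBalancerStatus{Ingress: hosts}, nil
}

func (f *shamF5) UpdateTCPLoadBalancer(name, region string, hosts []string) error {
	f.vips[name] = hosts
	return nil
}

func (f *shamF5) EnsureTCPLoadBalancerDeleted(name, region string) error {
	delete(f.vips, name)
	return nil
}

func main() {
	var lb TCPLoadBalancer = newShamF5()
	status, _ := lb.EnsureTCPLoadBalancer("web", "dc1", []string{"node1:30080"})
	fmt.Println(status.Ingress[0]) // node1:30080
}
```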
stevesloka commented Aug 19, 2015

Thanks for the quick feedback!

For option 1, I would need to build a custom k8s binary, right? Option 2 seems like the easiest to get running right now. I may play around with the HAProxy one to get a feel for it and then dive into the F5 specifically.

Also, I've only contributed docs back, so I'm pumped to be able to start doing some "real coding". I may update with some additional questions if this issue is appropriate.

bprashanth (Member) commented Aug 19, 2015

2 is the least invasive, though you might have to write more code. Sounds like that's a positive in your case.

A couple of things:

  1. This is how you subscribe to services and endpoint changes: https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L358. You basically just want to dynamically configure your f5 upon each change.
  2. Haproxy is running on a node in your cluster, so it can route to the endpoint IPs. This probably isn't the case for the f5. The easiest way to overcome this is to have f5 -> haproxy (in your cluster) -> endpoints. The haproxies have real IPs (see the README), which you plug into the f5.

There are better ways to solve 2 that bypass the haproxy layer (set up a tunnel from the f5 to a node in your cluster), but I haven't thought them through.

stevesloka commented Aug 19, 2015

I'm thinking if I can get notified when a service is created, then all I need to do is write config to the F5 with the NodePort assigned (or requested by the user), but that seems too simple. Also, ideally it would be better to avoid the HAProxy layer if I can, since the F5 should provide that for me.

For the first approach where I implement the sham cloud provider, is that an API server change? I'm not completely clear on where that fits.

bprashanth (Member) commented Aug 19, 2015

> I'm thinking if I can get notified when the service is created then all I need to do is write config to F5 with the NodePort assigned (or requested by user), but that seems too simple.

That's right. If you don't want to route to the endpoints directly, you can send traffic to the service via its NodePort.

> For the first approach where I implement the sham cloud provider, is that an API server change? I'm not completely clear on where that fits.

You will have to add your new provider under pkg/cloudprovider/providers, recompile, and restart the cluster with the right settings.
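In-tree providers register themselves at init time; the real registry lives under pkg/cloudprovider, but the pattern looks roughly like this dependency-free sketch (the names here are illustrative, not the actual upstream API):

```go
package main

import (
	"fmt"
	"sync"
)

// Interface stands in for the cloudprovider interface a provider implements.
type Interface interface{ ProviderName() string }

// Factory builds a provider from a config blob.
type Factory func(config string) (Interface, error)

var (
	mu        sync.Mutex
	providers = map[string]Factory{}
)

// RegisterCloudProvider mirrors the upstream registration pattern:
// each provider package calls this from init().
func RegisterCloudProvider(name string, f Factory) {
	mu.Lock()
	defer mu.Unlock()
	providers[name] = f
}

// GetCloudProvider looks up a registered provider by the name the
// cluster components were started with (e.g. --cloud-provider=f5).
func GetCloudProvider(name, config string) (Interface, error) {
	mu.Lock()
	f, ok := providers[name]
	mu.Unlock()
	if !ok {
		return nil, fmt.Errorf("unknown cloud provider %q", name)
	}
	return f(config)
}

type f5Provider struct{}

func (f5Provider) ProviderName() string { return "f5" }

func main() {
	RegisterCloudProvider("f5", func(string) (Interface, error) { return f5Provider{}, nil })
	p, err := GetCloudProvider("f5", "")
	if err != nil {
		panic(err)
	}
	fmt.Println(p.ProviderName()) // f5
}
```

This is why a new provider means a recompile: the registration happens at compile time, inside the binary.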

stevesloka commented Aug 19, 2015

So, to be a total noob: where does this "pkg/cloudprovider/providers" path live in the cluster? I'm assuming this needs to be compiled into the binary first? Would I just need to push a new API server, and not a new kubelet, to each node?

bprashanth (Member) commented Aug 19, 2015

Yeah, you can't just drop a binary into your cluster for a cloud provider; that's the pod option. For the cloud provider route you will need to follow: http://kubernetes.io/v1.0/docs/getting-started-guides/scratch.html#cloud-providers

stevesloka commented Aug 20, 2015

I'm having some issues with simple dev on the code running in my pod and these are most certainly around my newness to developing k8s.

I have my dev environment setup and am running k8s locally via "hack/local-up-cluster.sh". When I run the pod in my env, I get an issue trying to auth to the https endpoint (when I only have http) which makes sense.

So I then spun up an environment via the docker version. When I run that I get this error:
service_loadbalancer.go:436] Failed to create client: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory

How do you folks typically do dev? Do you have a cluster running in GCE and just use kubectl proxy somehow? Should I be using a Vagrant command? I realize there are probably a bunch of ways to solve, I just am looking for a nice path.

brendandburns (Contributor) commented Aug 20, 2015

To be honest, we mostly use GCE. I think that the Red Hat folks use Vagrant.

--brendan


stevesloka commented Aug 20, 2015

I'm building a pod that will listen for events when services are created / deleted to support my F5 needs. Do you have any thoughts on a faster dev cycle other than: 1. build binary, 2. make image, 3. deploy image, 4. update RC, 5. deploy pod?

brendandburns (Contributor) commented Aug 20, 2015

You can hack around it by using "docker run -v ..." to mount the secret yourself on your dev workstation without using Kubernetes. Essentially, simulate what it would be like in the Kubernetes cluster.

Or you can build the image out on the Kubernetes nodes, but that basic workflow is what is generally needed...

--brendan


bprashanth (Member) commented Aug 20, 2015

You should be able to use local cluster. Don't start with the pod; just make a Go program that runs your control loop. You can actually point your localhost Go program at a remote cluster using kubectl proxy:

  1. Run kubectl proxy (it'll start up on 8001 or something).
  2. You should be able to curl http://localhost:8001 and hit your remote apiserver.
  3. You can now create a Kubernetes client like so:
kubeClient := client.NewOrDie(&client.Config{Host: "http://127.0.0.1:8001", Version: testapi.Version()})
  4. Get this controller working, then swap out the client creation logic for NewInCluster.
  5. Run this in a pod on the local cluster.
  6. Push to the remote cluster.
stevesloka commented Aug 20, 2015

Ahh ok sweet. That's what I was looking for, a way to run and I thought kubectl proxy would help. Awesome, let me give that a go. =)

stevesloka commented Aug 21, 2015

Is there any way to print out the requests that "kubectl proxy" is making? Here's where I'm going: I can see the services that exist when everything initially loads. I put logs in the "eventHandlers" section to basically print "got add", "got delete", etc. Once the app is running, I'll try to add a service, but it doesn't seem like events are firing.

Here's my logs code:

eventHandlers := framework.ResourceEventHandlerFuncs{
    AddFunc: func(cur interface{}) {
        fmt.Println("-----------> got an add!")
        enqueue(cur)
    },
    DeleteFunc: func(cur interface{}) {
        fmt.Println("-----------> got a delete!")
        enqueue(cur)
    },
    UpdateFunc: func(old, cur interface{}) {
        fmt.Println("-----------> got an update!")
        if !reflect.DeepEqual(old, cur) {
            enqueue(cur)
        }
    },
}
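For reference, the handler -> enqueue -> worker shape these funcs feed into can be reduced to a dependency-free sketch (a toy stand-in for the real workqueue, not the service-loadbalancer code): handlers only record that something changed, and the worker does the actual reconfiguration.

```go
package main

import "fmt"

// queue is a toy stand-in for the controller's workqueue: handlers
// push keys onto a buffered channel, a worker drains and "syncs" them.
type queue struct{ ch chan string }

func newQueue() *queue { return &queue{ch: make(chan string, 16)} }

func (q *queue) enqueue(key string) { q.ch <- key }

// drain pops everything currently queued and returns the keys a real
// controller would sync (i.e. rewrite F5 config for).
func (q *queue) drain() []string {
	synced := []string{}
	for {
		select {
		case key := <-q.ch:
			synced = append(synced, key)
		default:
			return synced
		}
	}
}

func main() {
	q := newQueue()

	// The event handler only records *that* something changed:
	addFunc := func(key string) {
		fmt.Println("-----------> got an add!")
		q.enqueue(key)
	}
	addFunc("default/my-service")

	// The worker is where the load balancer actually gets configured.
	fmt.Println(q.drain()) // [default/my-service]
}
```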
bprashanth (Member) commented Aug 21, 2015

kubectl proxy --v=10
Just try running the serviceloadbalancer control loop with --dry=true and the proxy to make sure you have that working. https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L65

Replace the client with your proxy client: https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L431

stevesloka commented Aug 23, 2015

OK sweet, I'm in a much better state now. I can get events and see what's going on when services are registered and deleted, etc.

Quick design question, do you want me to extend the current code to have a flag to handle the F5? Or would you rather have a separate project?

brendandburns (Contributor) commented Aug 23, 2015

Personally, having a general-purpose service watcher with multiple plugins (haproxy, f5, etc.) seems best. Share as much code as we can.

Brendan

bprashanth (Member) commented Aug 23, 2015

Agreed. That's why I split loadbalancer config (https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L127) and loadbalancer controller (https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L229). You should be able to start up the loadbalancer controller with forward-services=true (https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L103) to get the behavior you want.

The loadbalancer config is currently a really simple interface consisting of 2 methods: write and reload. The write method will get called with some service structs (https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer.go#L115) and your loadbalancer can do with them what it likes. You can extend the loadbalancer config interface if you need to.

If you add unit tests (https://github.com/kubernetes/contrib/blob/master/service-loadbalancer/service_loadbalancer_test.go), I promise to preserve them when I extend the service loadbalancer to watch any new resources we add (e.g. L7 ingress points).

stevesloka commented Aug 24, 2015

Small API question: I want to get a list of all the nodes in the cluster to be able to update the F5 accordingly. I'm new to the API; would this be roughly the right syntax?

lbc.nodeLister.Store, lbc.nodeController = framework.NewInformer(
    cache.NewListWatchFromClient(
        lbc.client, "nodes", namespace, fields.Everything()),
    &api.NodeList{}, resyncPeriod, eventHandlers)

I get an error on &api.NodeList{} saying that the server couldn't find the requested resource. I know I'm missing something small with the API. I was using this for reference: https://github.com/kubernetes/kubernetes/blob/264a658afab60fb858cfc5886b86ed2a77779cd9/pkg/client/unversioned/cache/listers.go#L103

bprashanth (Member) commented Aug 24, 2015

What you're trying to do is set up a watcher. This is more complicated than simply getting a list of nodes; it will call your eventHandlers every time a node changes (add, update, delete), and if you list the nodeLister.Store, you're listing from memory, not etcd or the apiserver.

To do a simple one-time node list (wrapped in a function here so the early return is valid):

func listNodes(client *client.Client) []api.Node {
    nodes := []api.Node{}
    nodeList, err := client.Nodes().List(labels.Everything(), fields.Everything())
    if err != nil {
        return nodes
    }
    for i := range nodeList.Items {
        nodes = append(nodes, nodeList.Items[i])
    }
    return nodes
}

To answer your real question:
The problem is that nodes are not namespaced, and NewListWatchFromClient tries to apply a namespace. So if you want a watcher on nodes, you can do something like:

    lbc.nodeLister.Store, lbc.nodeController = framework.NewInformer(
        &cache.ListWatch{
            ListFunc: func() (runtime.Object, error) {
                return lbc.client.Get().
                    Resource("nodes").
                    FieldsSelectorParam(fields.Everything()).
                    Do().
                    Get()
            },
            WatchFunc: func(resourceVersion string) (watch.Interface, error) {
                return lbc.client.Get().
                    Prefix("watch").
                    Resource("nodes").
                    FieldsSelectorParam(fields.Everything()).
                    Param("resourceVersion", resourceVersion).Watch()
            },
        },
        &api.Node{}, resyncPeriod, nodeHandlers)
eparis (Member) commented Sep 1, 2015

stevesloka commented Sep 14, 2015

I've been waiting to get access to my internal F5 instance to start working on the integration. We also use Infoblox to manage our DNS. I was thinking that should potentially be a different container? My goal for our integration is to: 1. create the service, 2. create the VIP on the F5 + config, 3. create the DNS automatically to point to the VIP created in step 2.

Any thoughts on that approach or if I should split out the dns component?

brendandburns (Contributor) commented Sep 14, 2015

Yeah, that seems reasonable. You could separate out the two components so that someone who is using Infoblox but not an F5 could reuse your DNS component, but that's not strictly necessary.

--brendan

bprashanth (Member) commented Sep 18, 2015

I noticed @stensonb and Chris Snell working on a similar effort on IRC, so perhaps you guys can share notes.

chrissnell commented Sep 18, 2015

Thanks for steering me here, @bprashanth. So, yes, we are indeed working on such an effort. We wanted to avoid having to use HAProxy instances running within the cluster to proxy the traffic in from the (external) F5, so we decided to use NodePorts instead.

We're building a RESTful service called lbaasd that watches Nodes and Services events and updates F5 VIPs accordingly. It works like this:

  1. You'll create a new VIP in lbaasd and supply a service name along with a port name (from the service spec), as well as some frontend VIP configuration (what protocol, what port, SSL, etc.) and a "class" of IP (e.g. public, private, etc.) that you would like to use for the VIP.
  2. lbaasd talks to a companion service, called cidrd to obtain a "lease" on an IP for your VIP. Within cidrd, your ops team defines classes of IP space (e.g. public, private, etc.) and allocates blocks of IPs to these classes, which then get handed out to lbaasd and configured on the F5 device.
  3. lbaasd obtains the NodePort for your service from the Kubernetes API, along with a selection of nodes to serve as backend pool members. With the algorithm we use for node selection, your VIPs are evenly spread across the backend nodes to minimize network traffic hot spots.
  4. lbaasd then creates a VIP on the F5 via the iControl REST API, pointing it at the NodePort on the pool member nodes it chose in step 3.
  5. lbaasd sets up a watch of your service and a watch of all nodes and updates the F5 if nodes fall off, or NodePorts change, etc.
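The spread in step 3 can be as simple as deriving a starting offset from a hash of the VIP name, so different VIPs land on different slices of the node list. The helper below is a sketch of that idea under stated assumptions, not lbaasd's actual algorithm:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// pickBackends chooses k nodes for a VIP, starting at an offset derived
// from the VIP name so that different VIPs start at different nodes.
func pickBackends(vip string, nodes []string, k int) []string {
	if len(nodes) == 0 || k <= 0 {
		return nil
	}
	if k > len(nodes) {
		k = len(nodes)
	}
	h := fnv.New32a()
	h.Write([]byte(vip))
	start := int(h.Sum32() % uint32(len(nodes)))
	picked := make([]string, 0, k)
	for i := 0; i < k; i++ {
		picked = append(picked, nodes[(start+i)%len(nodes)])
	}
	return picked
}

func main() {
	nodes := []string{"10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"}
	// Deterministic per VIP name, so re-syncs pick the same backends.
	fmt.Println(pickBackends("web-vip", nodes, 2))
}
```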

There will be a command line util (lbaasctl) to create/list/update/delete VIP objects, but it's all RESTful so you could also integrate the lbaasd API into your workflow.

We're working like mad to get this to a functional state. I'm going to submit to the KubeCon CFP this week and hope to have this thing functional in time for the conference, regardless of whether we present.

chrissnell commented Sep 18, 2015

I also want to add that we abstracted the load balancer functionality using a Go interface so that any type of load balancer could conceivably be supported. We're going to be building the F5 implementation of that interface because that's what we run on but someone could implement this for Nginx Plus, or any other programmable load balancer.
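The interface described above might look something like the sketch below (names are illustrative, not lbaasd's actual API). Anything that can create and delete a VIP can slot in behind it:

```go
package main

import "fmt"

// VIP describes the desired frontend plus its NodePort backends.
type VIP struct {
	Name     string
	Port     int      // frontend port on the balancer
	NodePort int      // NodePort on every backend node
	Backends []string // node IPs
}

// LoadBalancer is the abstraction point: F5 today, Nginx Plus or any
// other programmable balancer tomorrow.
type LoadBalancer interface {
	EnsureVIP(v VIP) error
	DeleteVIP(name string) error
}

// fakeLB records calls in memory; a real implementation would talk to
// the device's API (e.g. iControl REST for F5).
type fakeLB struct{ vips map[string]VIP }

func (f *fakeLB) EnsureVIP(v VIP) error    { f.vips[v.Name] = v; return nil }
func (f *fakeLB) DeleteVIP(n string) error { delete(f.vips, n); return nil }

func main() {
	var lb LoadBalancer = &fakeLB{vips: map[string]VIP{}}
	lb.EnsureVIP(VIP{Name: "web", Port: 443, NodePort: 30443, Backends: []string{"10.0.0.5"}})
	fmt.Println(len(lb.(*fakeLB).vips)) // 1
}
```

A fake implementation like this also makes the controller testable without any hardware in the loop.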

stevesloka commented Sep 18, 2015

@chrissnell Awesome! That's exactly what I was going to attempt to do with another addition. I've been blocked trying to get access to our internal F5 to do development. I'm glad to help out and contribute since it seems you are much further ahead than I am.

The only other addition I had planned on making was creating DNS tied to the VIP used for the F5 so that we could get a nice "dynamic" name that went with the service.

Do you have your code up somewhere? Again, would love to help collaborate, but am stuck on no F5 access at the moment so working on dns piece (with Infoblox).

smarterclayton commented Sep 18, 2015

@stevesloka I don't know if you've seen https://github.com/openshift/origin/tree/master/plugins/router/f5, but when Ingress lands we should have the conversion code so you can just run the image (https://hub.docker.com/r/openshift/origin-f5-router/ which is currently not being pushed, should be by end of day) against a Kube server with Ingress points.

smarterclayton commented Sep 18, 2015

Right now the router pulls Routes (the seed for Ingress), once Ingress is possible the router will consume Ingress as well.

stevesloka commented Sep 18, 2015

@smarterclayton Sweet! Looks like I could potentially use the f5 integration as-is. My thought was to do what @chrissnell was going to do and just wire up to NodePorts, which potentially could be an intermediate step.

Could you give me the quick story behind Ingress? Just a type of routing?

I am going to try and get the NodePort routing pieces working now then can swap out later.

softwaredefinedworld commented Dec 18, 2015

Is the service-loadbalancer concept already integrated into the main tree? Can we rely on it to work, or is it just a new type of pod that uses haproxy to divert traffic to services based on configuration and doesn't require any other additional support from Kubernetes?

bprashanth (Member) commented Jan 18, 2016

@softwaredefinedworld didn't see your question. There is a resource called Ingress in the main tree that embodies the concepts of the service loadbalancer. See: https://github.com/kubernetes/kubernetes/blob/release-1.1/docs/user-guide/ingress.md

stevesloka commented Feb 8, 2016

Just an update on my F5 integration (it's been a while since I was last able to pick this up): I'm going to work to finally make this a thing. Since my branch is super old, I'm just going to grab latest and go from there.

bprashanth (Member) commented Feb 8, 2016

@stevesloka great! would be great to see an ingress controller for f5 (https://github.com/kubernetes/contrib/blob/master/Ingress/controllers/README.md). Those instructions should be up-to-date but you can always ask back if something at HEAD is out of sync and doesn't work.

chrissnell commented Feb 8, 2016

@bprashanth Maybe I'm missing something here but it seems that the big missing piece for the F5 is participation in the overlay/container network. I don't know if @stevesloka is doing something differently but our F5s live outside of Kubernetes and the overlay network (for us, Flannel) and don't have direct access to services unless they are exposed via NodePort. Is NodePort configuration available through the Ingress interface?

I hear it's possible to get F5s to participate in the overlay network--they are just Linux servers, more or less--but I'm not sure if we risk rendering our devices unsupported if we do that.

bprashanth (Member) commented Feb 8, 2016

> I don't know if @stevesloka is doing something differently but our F5s live outside of Kubernetes and the overlay network (for us, Flannel) and don't have direct access to services unless they are exposed via NodePort. Is NodePort configuration available through the Ingress interface?

That's the same for any cloud; Services are a different plane from Ingress. A service can exist as a nodeport and/or type=l4-loadbalancer, and still be part of an ingress. In fact, having nodeport services is a requirement for the gce ingress controller to work. Employing the Ingress metaphor makes sense if you need fanout (one IP -> multiple hostnames -> each with multiple urls going to different kubernetes service:ports), or TLS termination, SNI, re-encrypt, etc.

You can, for example, deploy an nginx per node as a daemon set, just like node port, listening on :80, but proxying traffic to all services specified in an Ingress at L7 using different hostnames and urls.
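The fanout described above (one IP, many hostnames, many paths) is just a routing table at heart. A toy matcher, assuming longest-prefix path matching (hostnames and backends here are made up):

```go
package main

import (
	"fmt"
	"strings"
)

// rule maps a host + path prefix to a kubernetes service:port backend.
type rule struct {
	host, pathPrefix, backend string
}

// match picks the rule with the longest matching path prefix for the host.
func match(rules []rule, host, path string) (string, bool) {
	best, ok := "", false
	bestLen := -1
	for _, r := range rules {
		if r.host == host && strings.HasPrefix(path, r.pathPrefix) && len(r.pathPrefix) > bestLen {
			best, ok, bestLen = r.backend, true, len(r.pathPrefix)
		}
	}
	return best, ok
}

func main() {
	rules := []rule{
		{"shop.example.com", "/", "storefront:80"},
		{"shop.example.com", "/cart", "cart:8080"},
		{"blog.example.com", "/", "blog:80"},
	}
	b, _ := match(rules, "shop.example.com", "/cart/items")
	fmt.Println(b) // cart:8080
}
```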

bprashanth (Member) commented Mar 10, 2016

chiradeep commented Mar 10, 2016

@chrissnell you are correct. Here's something I've put together for NetScaler and K8s (+ Mesos/Swarm): https://github.com/chiradeep/nitrox. Adapting it for other hardware LBs is as simple as writing an implementation of configure_app: https://github.com/chiradeep/nitrox/blob/master/netscaler.py#L188

bprashanth (Member) commented Mar 10, 2016

@chiradeep @chrissnell @stevesloka I would really like to get something that connects HW lbs and Kubernetes into contrib, so we can work on extending it to support more loadbalancers. This has been asked by enough people that I get the feeling we'll get contributors if we get the ball rolling. I don't care if it's Ingress or Service or just straight to endpoints :)

stevesloka commented Mar 10, 2016

I've been trying to get access to my corp F5 instance and have not yet been able to. The problem is that our version of F5 won't restrict my access to just my stuff, so they'd need to give me root access, which my IT team doesn't want to do. So I'm stuck at the moment on integrating with and testing that piece.

I do have the Infoblox integration complete, which gave us DNS. I can give that piece back now; you can see it here if it's of interest: https://github.com/upmc-enterprises/contrib/tree/f5-2/service-loadbalancer

chiradeep commented Mar 10, 2016

Does it have to be in Go? Go clients are usually not available for hardware LBs (but can be written).

bprashanth (Member) commented Mar 10, 2016

@chiradeep no it does not. Go is preferred because a lot of the Kubernetes community is familiar with it, so they'd probably be more willing to jump in and fix issues.

chrissnell commented Mar 10, 2016

The F5 REST API is straightforward and easy enough to implement in Go. The block that we ran into is building the state machines to watch the K8S API and handle services and nodes coming and going. We made good progress and then got pulled away on other projects. We still want this and if it's not implemented elsewhere, I hope to pick up the project again.

Lucius- commented Mar 11, 2016

@stevesloka Have you thought about using one of F5's virtual editions to test against? You can get a 90-day trial or request an eval version here

stevesloka commented Mar 11, 2016

Thanks @Lucius-, let me check that out and see if I can get it running.

bprashanth referenced this issue Apr 12, 2016: Loadbalancing umbrella issue #24145 (closed)
stevesloka commented May 3, 2016

Just another update: I am working on it. The F5 team has been awesome in helping support me, and I've got my own instance running. I'm looking at what Red Hat did with Origin; it looks like they have it mostly worked out, so I'll keep chugging from here. Thanks!

vipulsabhaya commented May 3, 2016

@stevesloka Hey looking forward to seeing F5 support. Curious if you're planning on doing this with an IngressController.

We are also looking at adding F5 support in K8s, and would like to collaborate with you on this if you're at that point.

stevesloka commented May 3, 2016

Hey @vipulsabhaya! Right now I'm working on a NodePort implementation since it's the simplest for my company to get rolling. But I've talked with @bprashanth a bit about wanting an Ingress version as well.

I've been trying to get heads down on this to get something up to look at and play with.

bprashanth (Member) commented May 3, 2016

We initially started service-loadbalancer as a Service-centric thing (https://github.com/kubernetes/contrib/tree/master/service-loadbalancer), but got a lot of feedback about centralized policy and security configuration, the need for a cross-platform lb api, etc. I think the ingress controller is the desired goal, but we can get there incrementally. If annotations on a Service are the easiest thing, let's start with that.

Fyi the gce ingress controller mandates nodeport services (https://github.com/kubernetes/contrib/tree/master/ingress/controllers/gce).

bpradipt commented Aug 5, 2016

I'm not very familiar with F5 LBs, but is making the F5 LB part of the VXLAN/flannel network as described in [1] an option? Does anyone have experience with that kind of setup?

[1] https://f5.com/resources/white-papers/vxlan-and-the-big-ip-platform

smarterclayton commented Aug 5, 2016

Not directly equivalent, but https://docs.openshift.org/latest/install_config/routing_from_edge_lb.html#establishing-a-tunnel-using-a-ramp-node describes what you can do if you want to ramp from an F5 when you can't make it part of the VXLAN. I know F5 was looking at updating to be compatible with the kernel and version of OVS we used, but I haven't looked recently to see if it happened.


chen23 commented Aug 5, 2016

@bpradipt I've used the F5 Python SDK to populate the FDB table and ARP entries following the documentation from: https://support.f5.com/kb/en-us/products/big-ip_ltm/manuals/product/tmos-implementations-12-1-0/8.html

I did this in my spare time as an example, but I cannot vouch for whether this is the "correct" way to network with the flannel network. Here's the example: https://github.com/f5devcentral/f5-icontrol-codeshare-python/tree/master/kubernetes-example. In my lab environment it allows the BIG-IP to route to the host via VXLAN. Related article: https://devcentral.f5.com/articles/f5-python-sdk-and-kubernetes-21045

vipulsabhaya commented Aug 5, 2016

We've added support for an F5 backend to a new service loadbalancer implementation (#1343). We plan to update this to consume Ingress as it evolves to support TCP load balancing.

pires (Member) commented Nov 24, 2016

F5 plug-in for Openshift has moved to https://github.com/openshift/origin/tree/master/pkg/router/f5.

fejta-bot commented Dec 18, 2017

Issues go stale after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
