Proposal: Declarative Configuration of Multi-Cluster Ingress #794

Closed
samcgardner opened this issue Jul 9, 2019 · 5 comments
@samcgardner

At the moment, it does not appear to be possible to configure multi-cluster Ingress in a purely declarative fashion (i.e. via an Ingress resource). I'm going to summarise the state of play as I understand it and then propose a solution (which I am willing to implement).

There is currently a tool and an Ingress class associated with multi-cluster Ingress, namely kubemci and the related class gce-multi-cluster. This tooling assumes the following workflow:

  1. Write an Ingress object
  2. Build a kubeconfig file listing all target clusters
  3. Run kubemci to apply this Ingress to all target clusters
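For concreteness, a minimal sketch of the Ingress from step 1 under the current workflow, using the gce-multi-cluster class annotation (the resource name, service name, and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-mci                    # placeholder name
  annotations:
    kubernetes.io/ingress.class: gce-multi-cluster
spec:
  backend:
    serviceName: my-service       # placeholder; must exist in every target cluster
    servicePort: 80
```

Step 3 is then an invocation along the lines of `kubemci create my-mci --ingress=ingress.yaml --kubeconfig=clusters.yaml` plus a project flag; see the kubemci README for the exact flags.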

This appears to work well in my testing, but is problematic from an operational standpoint. My use-case relies heavily on declarative configuration, and the requirement to run kubemci repeatedly is challenging. Additionally, kubemci does not support Network Endpoint Groups (NEGs), which, reading between the lines, appear to be the focus of Google's efforts to support cluster Ingress going forward.

Accordingly, I think an Ingress class that leverages NEGs to configure multi-cluster Ingress would be useful for me and other GKE users who rely on declarative configuration but wish to serve multiple clusters from a single Ingress. I'd like to propose a new Ingress class which attempts to configure itself in a purely declarative fashion as follows:

  1. Read back all existing L7 load balancers and see if one exists for the Ingress. If not, create it.
  2. Ensure a target proxy exists for the load balancer.
  3. Ensure a backend service exists for the load balancer.
  4. Ensure that NEGs exist for this cluster's endpoints (each controller manages only its own cluster's NEGs) and are registered with the backend service.
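The ensure loop above can be sketched as follows. This is a toy model, not the real ingress-gce code: `FakeGCE` stands in for the compute API, and all names here are hypothetical.

```python
class FakeGCE:
    """In-memory stand-in for the GCE L7 resources the controller would manage."""
    def __init__(self):
        # ingress name -> {"target_proxy": bool, "backends": set of NEG names}
        self.load_balancers = {}

def reconcile(gce, ingress_name, cluster_negs):
    """One pass of the proposed ensure loop, run by a single cluster's controller."""
    # 1. Create the L7 load balancer for this Ingress if it does not already exist.
    lb = gce.load_balancers.setdefault(
        ingress_name, {"target_proxy": False, "backends": set()}
    )
    # 2./3. Ensure the target proxy and backend service exist (modelled as a flag here).
    lb["target_proxy"] = True
    # 4. Ensure this cluster's NEGs are registered with the backend service,
    #    preserving NEGs registered by other clusters' controllers.
    lb["backends"] |= set(cluster_negs)
    return lb
```

Because every step is an "ensure" rather than a "create", re-running the loop is a no-op, which is what makes the configuration declarative.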

I believe this should be sufficient to allow declarative configuration of multi-cluster Ingress, but before I drop a PR on you folks I'd like to check a) whether you have any feedback on this proposal (as I'm sure you understand the internals of the GCLB much better than I do) and b) that you have bandwidth to review it.

Thanks for reading!

@samcgardner samcgardner changed the title Declarative Configuration of Multi-Cluster Ingress Proposal: Declarative Configuration of Multi-Cluster Ingress Jul 9, 2019
@rramkumar1
Contributor

Adding @bowei so he can respond to this.

/assign @bowei

@bowei
Member

bowei commented Jul 23, 2019

How do the other cluster's endpoints get added to the LB in this case?

@samcgardner
Author

samcgardner commented Jul 23, 2019

The core idea is for the controller in each cluster to set the backend service's backends to (current contents ∪ that cluster's NEGs). To outline the full procedure in a simple two-cluster case with no concurrency issues:

  1. Cluster 0 is created with a declarative multi-cluster Ingress configured
  2. Ingress controller in cluster zero (I0 hereafter for brevity) creates a global forwarding rule
  3. I0 creates a target proxy
  4. I0 creates a URL map
  5. I0 creates a backend service
  6. I0 creates NEGs (NEG0)
  7. I0 sets the backends for the backend service to be the current contents (i.e. {}) ∪ NEG0
  8. Cluster 1 is created with the same Ingress object configured
  9. I1 skips creating the forwarding rule, target proxy, URL map, and backend service, as these already exist
  10. I1 sets backends for the backend service to be NEG0 ∪ NEG1
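The union update in steps 7 and 10 is commutative and idempotent, so the final backend set is independent of which controller runs first, and re-running a controller changes nothing. A minimal illustration (names are placeholders):

```python
def apply_union(backends, cluster_negs):
    """Steps 7/10: set the backends to current contents ∪ this cluster's NEGs."""
    return backends | cluster_negs

NEG0, NEG1 = {"neg0"}, {"neg1"}

order_a = apply_union(apply_union(set(), NEG0), NEG1)  # I0 runs first, then I1
order_b = apply_union(apply_union(set(), NEG1), NEG0)  # I1 runs first, then I0
assert order_a == order_b == {"neg0", "neg1"}
assert apply_union(order_a, NEG0) == order_a           # re-running I0 is a no-op
```

Note this only holds in the stated no-concurrency case: a real controller doing a read-modify-write against the compute API would still need some optimistic-locking precondition (e.g. a fingerprint-style check) to be safe under concurrent updates.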

The weaknesses I can see in this are:

  1. Target proxies, URL maps, and backend services will need either configurable names or very predictable ones, and there's a risk of clobbering things, as the controller can't know what it does or doesn't manage
  2. NEGs will never get cleaned up, so use-cases which involve removing NEGs from a backend service will leak them unless the NEG itself is deleted. (On reflection this is wrong: the Ingress manages creating the NEGs, so its finalizer can clean them up.)

@wy100101

I'm interested in this because the current solution is so close to what I need OOTB. Today I can declaratively create a GCLB for a service running in GKE just by creating a service with the right annotation, and the corresponding ingress. Unfortunately, I can't create the same deployment in another cluster in another region that automatically gets added to the existing GCLB as additional backends.

I've tried adding the Ingress using the same reserved IP, and that doesn't work. I can add the NEGs to the original LB, but that gets overwritten eventually and probably isn't practical.

I just need a way to say "add this service as a backend to an existing LB" through some sort of annotation, and I'll have exactly what I need.
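Purely as an illustration of the wished-for hook, not a real API: the `cloud.google.com/neg` annotation below is the real GKE NEG annotation, while `example.com/attach-to-lb` is hypothetical, invented here to show the shape of the idea.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/neg: '{"ingress": true}'       # real GKE annotation: create NEGs for this service
    example.com/attach-to-lb: existing-gclb-name    # hypothetical: attach those NEGs to an existing GCLB
spec:
  selector:
    app: my-app
  ports:
    - port: 80
```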

@samcgardner
Author

I'm closing this because I think it's pretty clear at this juncture that this issue isn't going anywhere.
