feat: enable consul-servers to be accessed externally #27
Conversation
Force-pushed 30d0d58 to 7f44a3b
Resolves #28
Just a note on this: we plan on taking a more careful look at how to enable multi-DC (multi-Kube too) Consul deployments. This might be perfectly good as-is, but that's why we haven't touched it yet. We're just fixing some other bugs first.
Force-pushed 7f44a3b to 7d1c7f6
Updated the PR with RBAC functionality. Also improved the comments regarding the limitations of using the LoadBalancer service type. This PR creates a LoadBalancer for each consul-server in the cluster that serves only TCP traffic. WAN communication with another datacenter does work, but the lack of UDP traffic is less than ideal. The Kubernetes issue to track is kubernetes/kubernetes#23880.
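For readers unfamiliar with the approach, a per-server LoadBalancer Service might look roughly like the sketch below. This is illustrative only (names, labels, and the pod-pinning selector are assumptions, not the PR's actual template), and it shows why the UDP limitation matters: the Service can only carry the TCP side of Serf gossip.

```yaml
# Hypothetical sketch of one external LoadBalancer per Consul server pod.
# TCP only: Kubernetes LoadBalancer Services could not mix TCP and UDP
# at the time (kubernetes/kubernetes#23880).
apiVersion: v1
kind: Service
metadata:
  name: consul-server-0-external   # illustrative name
spec:
  type: LoadBalancer
  selector:
    app: consul
    component: server
    # Pin the Service to a single StatefulSet pod so each server
    # gets its own stable external address.
    statefulset.kubernetes.io/pod-name: consul-server-0
  ports:
    - name: server        # server RPC
      port: 8300
      protocol: TCP
    - name: serflan-tcp   # Serf LAN gossip (TCP half only)
      port: 8301
      protocol: TCP
    - name: serfwan-tcp   # Serf WAN gossip (TCP half only)
      port: 8302
      protocol: TCP
```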
Great PR, highly needed.
Force-pushed a9fe750 to cb60544
Force-pushed cb60544 to 35f72e0
Force-pushed 35f72e0 to d674dce
Force-pushed d674dce to 69984fe
The PR is superb. Is it going to be merged?
Related Consul improvement: 6356
Force-pushed 69984fe to 1f1c6fd
I understand that HashiCorp is looking at a better way to do federation across multiple clusters. But I've gone ahead and rebased this PR against master, so that it remains an option for engineers who want it.
Force-pushed 8b76309 to 75c2b86
Force-pushed 75c2b86 to 175c2b0
I'd be interested in this, as I don't currently want to deploy another DC. And while I do have consul-server accessible outside K8s right now, the pod IPs are not stable, and I'm a bit worried about that longer term.
Hey everyone, I don't think this PR makes sense anymore now that we have Consul 1.8.0, which supports federation through mesh gateways (https://www.consul.io/docs/k8s/installation/multi-cluster/overview). The only reason (that I know of) to expose each server individually as a service is the federation use-case, which is now solved through mesh gateways. @chancez can you explain more about your use case? Why do you need the consul servers to be accessed externally? And could this maybe be solved by having a single service front the servers rather than one service per server? Thanks!
@lkysow I'm mostly trying to get a bare-minimum Consul server setup in EKS with clients on-premise. I don't want to go through and set up an on-premise Consul server cluster. Right now it's actually working because I've exposed the Consul ports on the worker nodes so that the pods in EKS can be reached on-prem. Would mesh gateways allow me to funnel all of the on-prem client agents to the servers in EKS, without a local Consul server cluster on-prem?
Okay, thanks, that really helps. So basically you're using a form of #332 to expose the server ports on node IPs. Mesh gateways will not help with your use-case because they're only for datacenter-to-datacenter communication, i.e. server-to-server, not client-to-server. I don't think this PR is a good solution for you in this case, because the LAN IP would need to be the load balancer IP and because cloud load balancers are expensive. Instead I'd suggest you continue with how you've set things up, and we'll look to get that other PR merged.
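For context, the "expose server ports on node IPs" approach described above might look roughly like this in the server pod spec (illustrative only; the actual #332 implementation may differ, and the image tag and port list here are assumptions):

```yaml
# Illustrative pod-spec fragment: expose Consul server ports directly on
# the node IP via hostPort, so on-prem clients can reach servers at
# <node-ip>:<port>. Requires nodes to be routable from the clients' network.
containers:
  - name: consul
    image: consul:1.8.0   # assumed version
    ports:
      - containerPort: 8300   # server RPC
        hostPort: 8300
        protocol: TCP
      - containerPort: 8301   # Serf LAN gossip needs both TCP and UDP,
        hostPort: 8301        # declared as two entries
        protocol: TCP
      - containerPort: 8301
        hostPort: 8301
        protocol: UDP
```

Unlike the per-server LoadBalancer approach, this carries UDP gossip traffic, but it ties server reachability to node IPs, which can change as nodes are replaced.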
For properly creating a multi-datacenter federated Consul cluster. Creates an external LoadBalancer service for each Consul server. Depends on the end user knowing the correct annotations for creating an external LoadBalancer service for their cloud provider.
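As an example of the provider-specific annotations the description refers to, on AWS an end user might set something like the following on each server's Service (AWS is an assumption here; other clouds use different annotation keys):

```yaml
# Example (AWS assumed): annotations so the cloud provider creates an
# internal network load balancer for each server's Service.
metadata:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```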