This repository has been archived by the owner on Aug 25, 2021. It is now read-only.

feat: enable consul-servers to be accessed externally #27

Closed
wants to merge 1 commit into from

Conversation

tomwganem
Contributor

This is needed to properly create a multi-datacenter federated Consul cluster. It creates an external LoadBalancer service for each Consul server, and depends on the end user knowing the correct annotations for creating an external LoadBalancer service for their cloud provider.
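A minimal sketch of the kind of per-server Service this would generate (the AWS NLB annotation is only an example of a provider-specific annotation, and the names/labels are assumptions modeled on the chart's conventions, not the PR's actual template):

```yaml
# One Service per server pod, selected via the per-pod StatefulSet label,
# so each server gets a stable externally reachable address.
apiVersion: v1
kind: Service
metadata:
  name: consul-server-0-external
  annotations:
    # Example only -- substitute your cloud provider's LB annotations.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: consul
    component: server
    statefulset.kubernetes.io/pod-name: consul-server-0
  ports:
    - name: server        # server RPC
      protocol: TCP
      port: 8300
      targetPort: 8300
    - name: serflan-tcp   # Serf LAN gossip (TCP side only)
      protocol: TCP
      port: 8301
      targetPort: 8301
    - name: serfwan-tcp   # Serf WAN gossip (TCP side only)
      protocol: TCP
      port: 8302
      targetPort: 8302
```

Note the Service carries only TCP ports; the UDP side of Serf gossip cannot ride the same LoadBalancer (see the Kubernetes issue referenced below in the thread).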

@tomwganem
Contributor Author

resolves #28

@mitchellh
Contributor

Just a note on this: we plan on taking a look more carefully at how to enable multi-DC (multi-Kube too) Consul deployments. This might be perfectly good as-is but that's why we haven't touched it yet. We're just fixing some other bugs first.

@tomwganem
Contributor Author

Updated PR with RBAC functionality.

Also improved the comments regarding the limitations of using the LoadBalancer service type. This PR creates a LoadBalancer for each consul-server in the cluster that serves only TCP traffic. WAN communication with another datacenter does work, but the lack of UDP traffic is less than ideal.

The kubernetes issue to track is: kubernetes/kubernetes#23880

@yaron2

yaron2 commented Oct 24, 2018

Great PR, highly needed.

@hashicorp-cla

hashicorp-cla commented Feb 28, 2019

CLA assistant check
All committers have signed the CLA.

@tomwganem tomwganem closed this Apr 9, 2019
@tomwganem tomwganem reopened this Apr 11, 2019
@lkysow lkysow added area/multi-dc Related to running with multiple datacenters enhancement New feature or request labels Sep 18, 2019
@Xtigyro

Xtigyro commented Oct 17, 2019

@mitchellh @lkysow

The PR is superb. Is it going to be merged?

@adilyse
Contributor

adilyse commented Nov 19, 2019

Related Consul improvement: 6356

@tomwganem
Contributor Author

I understand that HashiCorp is looking at a better way to do federation across multiple clusters. But I've gone ahead and rebased this PR against master so that this continues to be an option for engineers who want it.

@tomwganem tomwganem force-pushed the github-external branch 2 times, most recently from 8b76309 to 75c2b86 Compare February 22, 2020 05:42
@chancez
Contributor

chancez commented Jun 19, 2020

I'd be interested in this. I don't currently want to deploy another DC, and while I do have consul-server being accessed from outside K8s right now, the pod IPs are not stable and I'm a bit worried about that longer term.

@lkysow
Member

lkysow commented Jun 23, 2020

Hey everyone, I don't think this PR makes sense anymore now that we have Consul 1.8.0 which supports federation through mesh gateways (https://www.consul.io/docs/k8s/installation/multi-cluster/overview). The only reason (that I know of) to expose each server individually as a service is for the federation use-case which is now solved through mesh gateways.

@chancez can you explain more about your use case? Why do you need the consul servers to be accessed externally? And could this maybe be solved by having a single service front the servers rather than one service per server? Thanks!

@lkysow lkysow added the waiting-on-response Waiting on the issue creator for a response before taking further action label Jun 24, 2020
@chancez
Contributor

chancez commented Jun 24, 2020

@lkysow I'm mostly trying to get a bare-minimum Consul server setup in EKS with clients on-premise. I don't want to go through and set up an on-premise Consul server cluster. Right now it's actually working because I've exposed the Consul ports on the worker nodes so that the pods in EKS can be reached from on-prem.
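For reference, a rough sketch of what that node-level exposure could look like on the server pod spec (the `hostPort` approach and the image tag here are assumptions about the setup described, not the actual EKS configuration):

```yaml
# Excerpt of a server pod spec: hostPort binds each Consul port on the
# worker node's IP, so on-prem agents can join via the node addresses.
containers:
  - name: consul
    image: consul:1.8.0
    ports:
      - name: server        # server RPC
        containerPort: 8300
        hostPort: 8300
        protocol: TCP
      - name: serflan-tcp   # Serf LAN gossip, TCP side
        containerPort: 8301
        hostPort: 8301
        protocol: TCP
      - name: serflan-udp   # Serf LAN gossip, UDP side
        containerPort: 8301
        hostPort: 8301
        protocol: UDP
```

On-prem clients would then point `-retry-join` at the worker node IPs, which works but inherits the instability of node addresses.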

Would mesh gateways allow me to funnel all of the on prem client-agents to the servers in EKS, without a local consul server cluster on prem?

@lkysow
Member

lkysow commented Jun 25, 2020

Okay, thanks, that really helps. So basically you're using a form of #332 to expose the server ports on node IPs.

Mesh gateways will not help with your use-case because they're only for datacenter-to-datacenter communication, i.e. server-to-server, not client-to-server.

I don't think this PR is a good solution for you in this case, because the LAN IP would need to be the load balancer IP and because cloud LBs are expensive. Instead, I'd suggest you continue with how you've set things up, and we'll look to get that other PR merged.

@lkysow lkysow closed this Jun 25, 2020