[WIP] add support for keepalived #68
Conversation
* commit 'adf314b89561d7ae9bdf8f47503c8ea4accff776':
  Fix newline issue for load balanced TCP services
Hi @Kosta-Github, why do you need a VIP and keepalived?
From within the cluster, the […]. Since our infrastructure doesn't support […], my PR #59 allows the different services to still be accessed from the outside by their corresponding service names […].
We use HW for SSL termination, so we can plug HAProxy etc. in behind them.
Moreover, you can run a consul agent on your Apache boxes and generate the Apache configuration more dynamically, using mod_proxy_balancer.
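A minimal consul-template sketch of what @sielaq describes might look like this; the service name `web` and the backend URL scheme are assumptions, not anything from this PR:

```
# apache-balancer.conf.ctmpl -- hypothetical consul-template source
# regenerates a mod_proxy_balancer pool from the consul catalog
<Proxy "balancer://app">
{{range service "web"}}  BalancerMember "http://{{.Address}}:{{.Port}}"
{{end}}</Proxy>
ProxyPass        "/" "balancer://app"
ProxyPassReverse "/" "balancer://app"
```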
I am up for this kind of setup: https://thejimmahknows.com/high-availability-using-haproxy-and-keepalived/ This has worked pretty nicely for us for the past week. I don't want to add another load balancer in front of it, since that would become a single point of failure again.
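For reference, the core of such a setup is a keepalived VRRP instance that floats a virtual IP between the HAProxy nodes. A minimal sketch, where the interface name, router ID, priority, and VIP are all assumptions:

```
vrrp_instance haproxy_vip {
    state BACKUP              # every node starts as BACKUP; highest priority becomes MASTER
    interface eth0            # assumption: the NIC carrying cluster traffic
    virtual_router_id 51      # assumption: must match on all peers
    priority 100              # raise this on the preferred node
    virtual_ipaddress {
        10.10.10.100          # assumption: the shared virtual IP
    }
}
```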
I would not recommend using the same HAProxy for both internal and public-facing services. Once you make your HAProxy accessible from the outside, anybody can access any of your services by manipulating the Host header. I would do as @sielaq suggested: run separate HAProxys (or nginx, Varnish, Apache httpd, whatever) that only give access to your public-facing services, and use keepalived for HA.
OK, sorry, I wasn't clear enough: by outside of the cluster I still mean inside the company's intranet. For traffic from the internet there are more systems around, doing auth, SSL termination, ... But those systems should not be tightly coupled to the cluster implementation...
Got it. Here's another idea on this that I've been playing with:
The problem for me is that I cannot change the company-wide DNS settings in that way. I already had a fight with DevOps to allow mapping all DNS queries […]. And again, this is for the company intranet, not for internet accessibility.
Got it. In my case I'm the DevOps dude :-) What I had in mind wasn't wildcard A-records but DNS delegation (configure the DNS servers to use consul's DNS interface to resolve all *.consul.intern.mycompany.com). But this of course requires cooperation from your DNS admin.
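As one illustration of that delegation, a corporate resolver running dnsmasq could forward the subdomain to the local consul agent's DNS interface; dnsmasq itself, the domain, and the default consul DNS port 8600 are assumptions here:

```
# /etc/dnsmasq.d/consul.conf -- hypothetical forwarding rule
# send *.consul.intern.mycompany.com queries to the consul agent's DNS port
server=/consul.intern.mycompany.com/127.0.0.1#8600
```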
…stname of the panteras container
… image. Set the env variable `PANTERAS_RESTART` to `no` (default), `on-failure`, or `always`; see: https://docs.docker.com/reference/commandline/run/#restart-policies
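For reference, those three values mirror docker's own restart policies from the linked docs; a plain invocation would look like this (the image name is a placeholder, not the project's actual image):

```
# equivalent docker restart policy; <panteras-image> is a placeholder
docker run --restart=on-failure <panteras-image>
```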
1. create a `unique ID` for each request
2. inject this ID into the HTTP headers as `X-Unique-ID`
3. append this ID to the HTTP log format
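A hedged sketch of the HAProxy directives those three steps correspond to; the frontend name and the exact format strings are assumptions (the ID format is the example from the HAProxy docs), not this PR's actual config:

```
frontend http-in
    bind *:80
    mode http
    # 1. build a unique ID per request (HAProxy docs' example format)
    unique-id-format %{+X}o\ %ci:%cp_%fi:%fp_%Ts_%rt:%pid
    # 2. inject the ID into the request headers as X-Unique-ID
    unique-id-header X-Unique-ID
    # 3. append the ID (%ID) to the HTTP log format
    log-format "%ci:%cp [%t] %ft %b/%s %ST %B %ID"
```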
{{env "KEEPALIVED_VIP"}} # the virtual IP | ||
} | ||
unicast_peer { # IP addresses of all other peer nodes | ||
{{range nodes}}{{$n := .}}{{if ne $n.Address $node.Address}}{{$n.Address}} |
I would do:

```
{{range service "consul"}}{{$n := .}}{{if ne $n.Address $node.Address}}{{$n.Address}}
```

`{{nodes}}` also contains the slaves; are you running keepalived on every consul host?
Good catch; I will change that... in order to limit the peer list to the nodes running a `consul agent`.
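For completeness, the corrected `unicast_peer` block would look roughly like this; the closing `{{end}}` tags are added here, and it assumes `$node` is bound to the local node earlier in the template, as in the original snippet:

```
unicast_peer {    # peers limited to hosts where the consul service is registered
{{range service "consul"}}{{$n := .}}{{if ne $n.Address $node.Address}}    {{$n.Address}}
{{end}}{{end}}}
```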
Cool; thanks for merging!
This is work-in-progress (but actually seems to work so far for me). I would like to use `keepalived` for this functionality (from the docs): this allows you to specify a `virtual IP` and connect that `virtual IP` to one of the nodes running the `HAProxy` in the cluster. If that node is no longer reachable, it automatically switches to another node in the cluster. This is probably similar to AWS's `elastic IP`, but I am unfamiliar with that, since I cannot use AWS for various reasons.

The question is: would you be interested in integrating this functionality into your technology stack? If so, I would add something to the `README.md` as well and do some more testing.

This functionality, paired with my last PR #59, provides a nice highly available load balancer mechanism.