2 Mailu servers for redundancy - Best approach #177
We have two Mailu instances being set up: one in-house, one in the cloud to act as a backup.
Is DNS MX the best approach or are there others that are preferable?
I haven't seen anything here on this issue, what are others doing?
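For context, the classic backup-MX arrangement the question refers to is just two MX records with different preference values. A hypothetical zone snippet (all names and addresses here are placeholders, not from any real setup) might look like:

```
; lower preference value = higher priority; the backup MX is only
; tried when the primary is unreachable
example.com.      IN  MX  10  mx1.example.com.   ; in-house primary
example.com.      IN  MX  20  mx2.example.com.   ; cloud backup
mx1.example.com.  IN  A   203.0.113.10
mx2.example.com.  IN  A   198.51.100.20
```

Note that a backup MX only queues inbound mail while the primary is down; it does not replicate mailboxes, so it covers delivery resilience but not mailbox availability.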
I'm all cloud-based at the moment, but am currently implementing a high-availability setup using HAProxy. It can do failover routing and supports SMTP / Dovecot. I still have a few bugs to work out in my config, but HAProxy is working with STARTTLS support (SSL is semi-working, but I need to sort out a Dovecot auth issue, as HAProxy is doing SSL termination and confusing Dovecot). Postfix is relaying outgoing email through Amazon SES to help ensure client email reaches its destination, as I've had issues with EC2 IP reputation before. I should note the entire setup works and is easier with an ELB (SSL termination even works), but the ELB doesn't seem to support a method of restoring the original client IP for Dovecot/Postfix (which HAProxy does).
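For reference, restoring the original client IP through HAProxy is done with the PROXY protocol (`send-proxy`). A minimal sketch of the SMTP side, with made-up addresses and backend names, might look like:

```
# haproxy.cfg fragment - TCP passthrough for SMTP with failover,
# forwarding the original client IP via the PROXY protocol
frontend smtp_in
    mode tcp
    bind :25
    default_backend postfix_servers

backend postfix_servers
    mode tcp
    option smtpchk
    server primary 10.0.0.10:25 check send-proxy
    server standby 10.0.0.11:25 check backup send-proxy
```

The backends then need to be told to expect PROXY headers: Postfix via `postscreen_upstream_proxy_protocol = haproxy`, and Dovecot via `haproxy = yes` on the relevant `inet_listener`.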
We would like to reconfigure our infrastructure to take advantage of this now.
Broadly, we're trying to address two issues with this: deliverability and resilience.
Ideally we want our primary backend to reside in our DMZ for security, control, cost, etc. We have a KVM cluster with ZFS and RAID, and Ceph on the way.
Even though our public IP space is classed as commercial, some foreign networks still class it as residential ADSL with a poor reputation because their data is out of date. There is also a three-day turnaround from the ISP for adding, removing or changing PTR records.
We now have two separate servers with hosting companies here and one test server in the DMZ; we would like to migrate them all to one domain.
We have also had a lot of political interference with hosting companies abroad: they just block SMTP with no warning or notice.
So being able to route mail out through a different IP easily, and to migrate hosts quickly, is important to us.
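One common way to decouple outbound routing from a host's own (possibly poorly reputed) IP is a Postfix smarthost, much like the SES relaying mentioned above. A minimal `main.cf` sketch, where the relay hostname and credentials file are placeholders:

```
# main.cf fragment: relay all outbound mail through an external smarthost
relayhost = [smtp.relay.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
```

Switching providers then only means changing `relayhost` and the credentials, without touching DNS or the local IP's reputation.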
We are also in a rapidly developing environment that sees the real impact of climate change. Occasional power loss from accidents, transmission upgrades or flooding can't be ruled out.
Being able to switch to a secondary backend for up to 6 hours or so would be ideal.
We were planning on using Portainer to monitor them, unless someone can suggest something better.
Anyway, that's a quick rundown of the kinds of challenges we face and what we want to address, so you understand where we are coming from.
We're wondering how we can get to a situation where we can test a solution, and what help we can give the project to get there.
I've been playing with Docker Swarm on 3 cheap VPS hosts. Docker has its own routing mesh for traffic distribution / HA: if one of the containers goes down, the user doesn't notice anything. (Not yet in production, working on it.)
For redundant storage, I'm using GlusterFS in a triple-replicated setup, using data volumes from the same hosts. Since the hosts are cheap, write performance sometimes seems to suffer a bit.
If I get to the point of having a serious user base, it'll just be a matter of adding worker nodes to the swarm and the GlusterFS pool, and that's all there is to do for scalability.
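For anyone wanting to try the same layout: a triple-replicated GlusterFS volume can be created along these lines (host names, brick paths and the volume name are examples, and this assumes glusterd is already peered across the three nodes):

```shell
# create a replica-3 volume across the three swarm nodes
gluster volume create mailstore replica 3 \
    node1:/gluster/brick node2:/gluster/brick node3:/gluster/brick
gluster volume start mailstore

# mount it on each node so containers can bind-mount the shared path
mount -t glusterfs node1:/mailstore /mnt/mailstore
```

With replica 3, every write goes to all three bricks before it is acknowledged, which is where the write-performance hit on cheap hosts comes from.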