ALB v0.8.x and clustering related options #17

Closed

fititnt opened this issue Nov 23, 2019 · 6 comments

fititnt commented Nov 23, 2019

The ALB v0.7.4-beta was just released, and I will try to dedicate ALB v0.8.x to implementing clustering-related options, with the following limitations:

  1. The implementation should not require floating IPs
    1. Most well-known and very efficient HA setups (like keepalived) actually require them. The type of audience and the VPS prices we're targeting do not allow it
      1. To put this in perspective: on some providers it can be cheaper to spin up FOUR 8 GB RAM VPSs than to pay for a single ALB on AWS just to route traffic to the VPSs (which are not included in the price)
  2. The implementation should not require additional disks or complex, hard-to-automate formatting of the main disk
    1. DRBD requires this, so we will have to look for alternatives.

To avoid too much overthinking, these are allowed:

  1. The implementation could require a minimum number of targets greater than 2
    1. Some providers where users pay for floating IPs could, in theory, allow just 2 in some specific cases
  2. The implementation could (at least at first) require a very specific number of hosts
    1. 3 is a good number.
  3. The implementation is free to decide which strategy to use if that makes it easier to set up and maintain in the long term, with less human intervention.
    1. So it is free to choose Active/Active, Master/Slaves, or Active/Standby
    2. For data that can be recreated on demand (like Let's Encrypt secrets), the implementation does not need to be ACID compliant.
      1. In the worst-case scenario, a split-brain should not take down the cluster of Load Balancers.

Maybe I will not even be able to make an MVP of this in a way that could be automated and released in ALB working out of the box, but this issue is about giving it a try.
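
To make these constraints concrete, here is a minimal inventory sketch of a fixed-size, floating-IP-free cluster. The host names and the `alb_cluster` group name are illustrative only, not existing AP-ALB conventions:

```yaml
# Hypothetical Ansible inventory sketch: three equal load balancer nodes,
# each reachable directly via its own public IP (e.g. DNS round-robin),
# so no floating IP and no extra disks are required.
all:
  children:
    alb_cluster:
      hosts:
        alb-node1.example.com:
        alb-node2.example.com:
        alb-node3.example.com:
```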

fititnt pinned this issue Nov 23, 2019

fititnt commented Nov 23, 2019

I think we will need to implement some way to allow 2 types of restricted modes (3 modes if we also count the unrestricted one) when using the AP-ALB role:

  • Unrestricted (run all rule types)
  • Apps only (only the app-specific rules)
  • Infrastructure only (all rules, except the app-specific ones)

The reason for this is that some tasks (like changing the HAProxy rules, changing the default OpenResty pages, etc.) tend to be infrequent after a good initial setup, while app-only rules could be very, very active.

With these modes, the chances of breaking things could be hugely reduced (and runs could at least get an extra speed boost). This could also allow less skilled people to use AP-ALB for daily tasks.
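
As a rough illustration only, the restriction could be a single role variable; `alb_manage_mode` and its values are hypothetical names, not existing AP-ALB options:

```yaml
# Hypothetical group_vars sketch for the restricted modes described above.
# 'alb_manage_mode' is an illustrative variable name, not an AP-ALB option.
alb_manage_mode: apps-only        # unrestricted | apps-only | infra-only
```

Inside the role, infrastructure-level tasks could then be guarded by that variable, for example:

```yaml
# Sketch of a guarded infrastructure task; only runs outside apps-only mode.
- name: Reconfigure HAProxy (infrastructure-level task)
  ansible.builtin.template:
    src: haproxy-standard.cfg.j2
    dest: /etc/haproxy/haproxy.cfg
  when: alb_manage_mode != 'apps-only'
```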

fititnt added a commit to fititnt/ansible-linux-ha-cluster that referenced this issue Nov 23, 2019
fititnt added a commit to fititnt/ansible-linux-ha-cluster that referenced this issue Nov 24, 2019
fititnt added a commit to fititnt/ansible-linux-ha-cluster that referenced this issue Nov 24, 2019
…rsions from subdirectories (using Roles installed on the system) and the root folder (using roles from roles/* local folder)(refs fititnt/ap-application-load-balancer#17)
fititnt added a commit that referenced this issue Nov 24, 2019
…: created haproxy-standard.cfg.j2 from haproxy-minimal.cfg.j2
fititnt added a commit that referenced this issue Nov 24, 2019
…nlb_raw_defaults & nlb_raw_bottom; draft of internal variables nlb_listen_openresty_safetoenable & nlb_listen_redis_safetoenable

fititnt commented Nov 24, 2019

Life is much, much easier when a private network is available at the cloud provider level, and not only via software-defined networks.

fititnt added a commit to fititnt/ansible-linux-ha-cluster that referenced this issue Nov 25, 2019
fititnt added a commit that referenced this issue Nov 25, 2019
…: new folder convention for store HAProxy (and 'NLB strategies')

fititnt commented Nov 27, 2019

With #22 it implicitly means that either I should set up some way to synchronize files using Consul watching for changes, or literally learn a new scripting language, Lua, and write an extension to https://github.com/GUI/lua-resty-auto-ssl, just to avoid using Redis.

Not that I would not like to eventually learn Lua, but it would be sooner than I expected.
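
For the first option, a rough sketch of the Consul side, under the assumption that certificate data lives under a key prefix (the `letsencrypt/` prefix and the handler script path are hypothetical):

```yaml
# Hypothetical Ansible task: register a Consul watch that runs a handler
# script whenever keys under a prefix change, so certificate files could be
# re-synchronized to disk on every node.
- name: Install Consul watch for certificate changes
  ansible.builtin.copy:
    dest: /etc/consul.d/watch-certs.json
    content: |
      {
        "watches": [
          {
            "type": "keyprefix",
            "prefix": "letsencrypt/",
            "args": ["/usr/local/bin/alb-sync-certs.sh"]
          }
        ]
      }
    mode: "0644"
```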


fititnt commented Dec 4, 2019

I think I will eventually create a new storage adapter for lua-resty-auto-ssl just to support etcd as an extra alternative to Consul, if I manage to find the time.

Kubernetes uses etcd (and is not likely to support different pluggable storage backends soon, see Pluggable storage backends (was Support for Consul K/V storage) kubernetes/kubernetes#1957). Even if etcd lacks some features of Consul, one old benchmark at https://coreos.com/blog/performance-of-etcd.html is not that bad. So it is more a question of how easy it would be to create such a storage adapter. And that mostly depends on whether someone else has already created a Lua abstraction for this (so I would not need to create a library from scratch).
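
If such an adapter existed, selecting the backend could be a single variable. Everything below is a hypothetical sketch: no etcd adapter exists upstream, and the variable names are illustrative:

```yaml
# Hypothetical sketch: selecting the lua-resty-auto-ssl storage backend.
# Upstream ships a Redis adapter; 'consul' refers to the approach discussed
# in #22, and 'etcd' would require the new adapter described above.
alb_autossl_storage: etcd                         # redis | consul | etcd
alb_autossl_storage_endpoint: "127.0.0.1:2379"    # illustrative etcd endpoint
```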


fititnt commented Dec 6, 2019

I think I will rewrite ap-application-load-balancer, already in v0.8.x, to run not only on the Debian/Ubuntu family, but also on RHEL/CentOS. This may actually require less effort in the very short term than rewriting, or creating from scratch, an Ansible Role to run Galera Cluster.

The demo repository fititnt/ap-alb-cluster-demo has already been renamed to fititnt/ansible-linux-ha-cluster, and considering the options already implemented with #22 and #29, having at least one database in a fully automated installation, with roles that could actually be used non-stop in production, is worth the effort.

And of course I have to implement these features for some real clients, and maybe delivering something already fully HA (with the exception of shared storage, which is slow when done via software) would turn out better than I initially expected.

But oh god: what I thought v0.8 would take, around one week (it has already been 2), is likely to take maybe one more week. But if #31 also works, it would make apps relatively easy to import/export between different clusters, even without using Docker/Kubernetes.


fititnt commented Dec 24, 2019

Done

Same comments from #34 (comment) and #34 (comment) apply here.

fititnt closed this as completed Dec 24, 2019
fititnt unpinned this issue Dec 24, 2019