
Single zone, multiple masters #732

Closed
watkinsv-hp opened this issue Oct 26, 2016 · 44 comments

@watkinsv-hp

We would like to run multiple masters in a single availability zone when using kops with the AWS cloud provider. While #661 has some great discussion around single-zone vulnerability versus true high availability, we have a slightly different use case: rather than spreading a single cluster across multiple availability zones, we want to spread individual clusters around multiple availability zones. This makes it easier for us to specify a desired outcome when distributing a deployment across multiple availability zones and regions using the federation annotations (the idea is more control over both desired zones and regions when running cluster-per-zone in multiple regions).

How do y'all feel about multiple masters in a single availability zone when viewed from this perspective? Is this a topic worth discussing? Should I go catch up on some other previous conversations? Thanks!

@starkers

The way it currently works on AWS is by spinning up 4 auto scaling groups.

The first three are just 1 master per AZ, and the final one is for the nodes (which seem to distribute randomly, as if by magic, for me).

I haven't tried it with fewer than three zones, but when you run kops you can explicitly specify the zones you want... My assumption was that this affected the nodes InstanceGroup too? If not, then I agree it's not a bad idea. Masters within the same zone should always keep consensus, right? :-)
(...even if they're unreachable)

@chrislovecnm
Contributor

@brandoncole thoughts on this?

@brandoncole

@chrislovecnm @watkinsv-hp I think there's merit in this request - perhaps not from an HA perspective, but for handling node failures gracefully within one availability zone, if that's all you're looking for. It's not uncommon to run cluster mirrors as the HA strategy, with higher-level load balancing.

Sounds like the proposal here would be to have something like:

--master-zones=us-east-1b,us-east-1b,us-east-1b

As the syntax for running multiple masters in the same zone?

@watkinsv-hp
Author

Thanks for the discussion!

As a user of kops, I would find a combination of --num-masters=N and --master-zones=us-east-1b more intuitive. Consider the eu-central-1 region in AWS, where there are only two availability zones; it would be much easier to combine --num-masters=3 --master-zones=eu-central-1a,eu-central-1b to spread 3 masters across 2 zones, or --num-masters=3 --master-zones=eu-central-1b for one cluster and (in another run) --num-masters=3 --master-zones=eu-central-1a for users who fall into the single-AZ camp.

I understand the complexity added by --num-masters=N with regard to enforcing a sane number (e.g. 1 or 3 or 5) and figuring out where to put an extra master when there are only 2 zones.
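For illustration, the two proposed invocations would look something like this (hypothetical syntax - --num-masters was not an existing kops flag at the time, and the cluster name is illustrative):

# 3 masters spread across eu-central-1's two AZs (hypothetical --num-masters flag)
kops create cluster --num-masters=3 --master-zones=eu-central-1a,eu-central-1b k8s.example.com

# 3 masters in a single AZ; run once per zone for cluster-per-zone setups (hypothetical flag)
kops create cluster --num-masters=3 --master-zones=eu-central-1b k8s.example.com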

@justinsb
Member

I was thinking that we should in fact specify instancegroups for the etcd clusters, rather than zones. It would be much more precise, and cope with clouds where the zones concept doesn't necessarily apply.
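Roughly sketched, the cluster spec could then reference an instance group directly for each etcd member (the field shapes here mirror the v1alpha2 etcdClusters config that appears later in this thread; the instance group names are illustrative):

  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-ig-1
      name: a
    - instanceGroup: master-ig-2
      name: b
    - instanceGroup: master-ig-3
      name: c
    name: main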

@watkinsv-hp
Author

Been stewing on the idea of specifying instance groups for the etcd clusters rather than zones. I like it more and more. Decoupling etcd from the masters makes a lot of sense and really minimizes the need for any kind of redundancy for the masters (in light of auto scaling groups respawning masters that are KIA).

If there were an instance group for the etcd cluster and only 1 master, a master would at worst be unreachable for moments, with no manual intervention, assuming normal cloud shenanigans - which is really what I'm after here... plus managing individual Kubernetes components becomes much easier.

@chrislovecnm
Contributor

Yah, that is an issue for me architecturally and for upgrades ;( I know this is a big debate, but I like to keep it as simple as possible. We have been managing multiple components on servers for years; the cost and complexity concern me. We are managing noisy neighbors on nodes all the time - how is managing etcd any different? I would have to have my mind changed significantly. So go on, change my mind, tell me I am wrong!!!

@chrislovecnm
Contributor

Oh, and Kelsey started this - none of the Google installs have ever done this.

@chrislovecnm
Contributor

@watkinsv-hp open another issue for me if you want to discuss breaking out the etcd servers.

@olvesh
Contributor

olvesh commented Nov 18, 2016

Not sure if it deserves a separate issue, but if I want to set up an HA cluster in an AWS region with only two AZs, this is not possible now. I would still want an HA etcd quorum (3+ nodes) and some zone separation if possible.

Perhaps #772 would be the place to go for this issue of mine, but I would still want two masters, one in each zone.

@chrislovecnm
Contributor

That is actually a good use case ;)

@weaseal

weaseal commented Dec 7, 2016

Having this feature would make it possible for my organisation to use kops, without it we can't. Here's our use case:

  • Our regions are already established, since we are extending our existing architecture by adding Kubernetes: I have to use us-west-1.
  • us-west-1 has 3 AZs, but one of them is 100% full, meaning AWS is not allowing any additional resources to be added there (it's been this way for months - not a short-term issue), so only us-west-1b and us-west-1c are usable.
  • kops fails to create a multi-master setup, since AWS spews errors when kops tries to do anything in us-west-1a.
  • If I could do something like "--master-zones=us-west-1b,us-west-1b,us-west-1c" (giving me 2 masters in 1 AZ and 1 master in the other), then we could use kops.

As it is, kops is unusable for us due to this limitation. @watkinsv-hp's suggestion from Oct 27 would solve this nicely.
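Spelled out, the invocation we are after would look roughly like this (cluster name illustrative; kops rejected duplicated master zones at the time, which is exactly the limitation described above):

kops create cluster \
  --zones=us-west-1b,us-west-1c \
  --master-zones=us-west-1b,us-west-1b,us-west-1c \
  k8s.example.com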

@chrislovecnm
Contributor

If anyone wants to design and implement this we would welcome the PR!!

@weaseal

weaseal commented Dec 8, 2016

I took a look; unfortunately it was my first time ever looking at Go, so I didn't have much luck. Some notes for the next person: I think the relevant code is around line 442 of pkg/apis/kops/cluster.go.

@chrislovecnm
Contributor

chrislovecnm commented Dec 8, 2016

So the first place I would start, if you are new to Go, is maybe writing an e2e test for it. Test it, then write it ;)

We have office hours tomorrow. If you want to swing by, ping me on Slack.

@justinsb
Member

kops 1.5 will support (but not encourage!) this sort of configuration :-)

justinsb added this to the 1.5.0 milestone Dec 28, 2016
@jbrunk1966

@justinsb does this mean the alpha version of kops 1.5 released today should support a multi-master setup in only 2 AZs?

For example:

--master-zones="eu-central-1a,eu-central-1b,eu-central-1a"

@MilanDasek

@jbrunk1966 Unfortunately, still the same error :-( I am waiting on this as well.

@jbrunk1966

@MilanDasek my Go-oriented workmate checked the code of 1.5.0-alpha3, and it seems the code for this feature is still missing...

@MilanDasek

Is there any way to add masters in other zones once the cluster is created?
In other words, I create 1 master in eu-central-1a and want to add 2 masters in eu-central-1b.

Thanks

M

@starkers

starkers commented Jan 17, 2017

I'm not sure, but it should be possible (although the process isn't well documented).

What might be easier is deploying to two AZs, then once the cluster is up you can scale one of the instance groups (auto scaling groups) up. See "kops get ig", then edit the target with "kops edit ig foo".
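A sketch of that flow, using commands that appear elsewhere in this thread (instance group name illustrative; note that later comments report extra instances in a master group do not join as masters without matching etcd configuration):

kops get ig --name $NAME                        # list the instance groups
kops edit ig master-eu-central-1a --name $NAME  # raise minSize/maxSize
kops update cluster $NAME --yes                 # apply the change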

@MilanDasek

@starkers and how do you deploy masters to 2 AZs? When I try --master-zones eu-central-1a,eu-central-1b, kops tells me it is not possible to deploy only 2 masters; when I try --master-zones eu-central-1a,eu-central-1b,eu-central-1a, it tells me it detected another master in zone eu-central-1a.

So it must be done manually after the cluster is created - but how?
Scaling masters in one zone from 1 to 2 is simple.

@chrislovecnm
Contributor

We have a couple of issues and PRs open about this. Work in progress.

@gopinatht

gopinatht commented Jan 18, 2017

+1 for this issue. I am bound by the rules and limitations of my organization's policies. The shared VPC I am using will only allow 2 AZs. It would be great if I could spread three master instances across two AZs.

@jbrunk1966

Regarding 1.5.0-alpha4, which lists:

"Multiple masters in the same AZ (by kops edit cluster, currently)"

We need assistance, please - what exactly do we have to configure, and how?

@airstand

I have also tried 1.5.0-alpha4, and the only thing I saw is that we are able to edit the instance group for every master and set min/max to something other than 1, but in that case the extra masters do not join the cluster.
I am not sure what has to be edited in the cluster spec, especially for the etcd members.

@chrislovecnm
Contributor

This should work if you set the master zones to --master-zones=us-east-1b,us-east-1b,us-east-1b, for instance.

@jbrunk1966

@chrislovecnm nope, this still doesn't work:

--master-zones=eu-central-1a,eu-central-1a,eu-central-1a \

results in:

found two master instance groups in subnet "eu-central-1a"

@airstand

As I understand from Justin, we have to run kops edit cluster and then edit the subnets to be the same.

@MilanDasek

Guys,

Can you please help me and write out exactly what I need to change to get 3 masters in one zone?

Is it kops edit ig master-eu-central-1a and changing min and max size to 3?

Or is it kops edit cluster - and if so, what exactly do I change?

==================================
Another topic - is there any ETA for when it will be possible to create a 3-master cluster with a setup like --master-zones=eu-central-1a,eu-central-1a,eu-central-1b ???

Thanks.

@krisnova
Contributor

Adding the documentation label... this is a highly visible and highly requested feature. We need docs on this ASAP (with an example).

I will see if I can't get a working example into a markdown file shortly.

CC @evildandelions @chrislovecnm @justinsb @geojaz in case any of you can offer an example here before I can 😄

@airstand

I want to ask: is there any option for kops create cluster that will do all of these things with the masters/etcd members?

@kamilhristov
Contributor

Here are step-by-step instructions for running 3 masters in a single AZ:

  1. Create a new cluster configuration by specifying 3 zones:
kops create cluster $NAME \
        --zones eu-west-1a,eu-west-1b,eu-west-1c \
        --master-zones eu-west-1a,eu-west-1b,eu-west-1c
  2. Edit each instance group's spec.subnets to match the desired zone:

kops edit ig master-eu-west-1b --name $NAME

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2017-01-28T15:36:51Z"
  labels:
    kops.k8s.io/cluster: kube.kamilhristov.com
  name: master-eu-west-1b
spec:
  associatePublicIp: true
  image: kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-10-21
  machineType: m3.medium
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - eu-west-1a

kops edit ig master-eu-west-1c --name $NAME

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2017-01-28T15:36:51Z"
  labels:
    kops.k8s.io/cluster: kube.kamilhristov.com
  name: master-eu-west-1c
spec:
  associatePublicIp: true
  image: kope.io/k8s-1.4-debian-jessie-amd64-hvm-ebs-2016-10-21
  machineType: m3.medium
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - eu-west-1a
  3. As a result, kops get ig --name $NAME should return something similar to:
NAME                    ROLE    MACHINETYPE     MIN     MAX     SUBNETS
master-eu-west-1a       Master  m3.medium       1       1       eu-west-1a
master-eu-west-1b       Master  m3.medium       1       1       eu-west-1a
master-eu-west-1c       Master  m3.medium       1       1       eu-west-1a
nodes                   Node    t2.medium       2       2       eu-west-1a

This way all instances will be created in one zone. The master group names are confusing, though.
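To actually build the cluster after editing, one presumably then applies the configuration with the update command used elsewhere in this thread:

kops update cluster $NAME --yes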

@airstand

@kamilhristov I don't think this explanation is right, because you have to configure the etcd members to be in the same AZ as well.
That should be done by editing the whole cluster, not the master instance groups one by one.

Waiting for @justinsb to provide the latest official way of doing this.

@kamilhristov
Contributor

@airstand the etcd members configuration uses the instance group name as a reference:

  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: eu-west-1a
    - instanceGroup: master-eu-west-1b
      name: eu-west-1b
    - instanceGroup: master-eu-west-1c
      name: eu-west-1c
    name: main
  - etcdMembers:
    - instanceGroup: master-eu-west-1a
      name: eu-west-1a
    - instanceGroup: master-eu-west-1b
      name: eu-west-1b
    - instanceGroup: master-eu-west-1c
      name: eu-west-1c
    name: events

Thus, each etcd member's AZ matches its instance group's AZ.

I am not sure that this is the "correct" way, but at least it is working.

@olvesh
Contributor

olvesh commented Jan 28, 2017

@kamilhristov - a bit rhetorical of me, but could you do this in eu-central-1 instead? :-)

We will need to spin up Kubernetes clusters in regions with only two zones. Hopefully this will be working by the time kops 1.5 is released. If not, we will have to start looking for alternative ways of bootstrapping. I hope it doesn't come to that, since kops makes things so easy.

@kamilhristov
Contributor

@olvesh for regions with 2 zones, the cluster configuration has to be created from YAML in order to bypass the create cluster command's validation. I have created an example manifest with a minimal configuration:

kube.yaml

You need to run kops create -f kube.yaml, which will create the cluster and instance group configuration.

Then create the SSH public key secret:

kops create secret --name kube.kamilhristov.com sshpublickey admin -i ~/.ssh/id_rsa.pub

After that, you should be able to populate the cluster resources with kops update cluster --yes.
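For a rough idea of the relevant parts of such a manifest, here is a minimal sketch assuming the v1alpha2 shapes shown above - three master instance groups spread 2+1 across the two zones, with the etcd members referencing them (most Cluster spec fields omitted; names illustrative):

  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-eu-central-1a-1
      name: a-1
    - instanceGroup: master-eu-central-1a-2
      name: a-2
    - instanceGroup: master-eu-central-1b-1
      name: b-1
    name: main
  # ...plus an equivalent "events" etcd cluster, and three InstanceGroup
  # manifests whose spec.subnets point at eu-central-1a, eu-central-1a
  # and eu-central-1b respectively.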

@MilanDasek

Hello,

Thanks for the info. Unfortunately, after I run
kops update cluster %NAME% --yes
I get:

W0129 10:09:16.817497 194 apply_cluster.go:635] unable to find version information for kops version "1.5.0-beta1" in channel
W0129 10:09:16.818513 194 apply_cluster.go:702] unable to find version information for kubernetes version "1.4.7" in channel

which is probably being solved here: #1667

Anyway, my YAML is attached - it creates 3 masters (2 in the same zone and the 3rd in another one):
create-cluster-template.yaml.txt

I hope it will work once "version information" is solved. Or is there any workaround?

@UlaganathanNamachivayam

Do we have a solution for this yet? We are being forced to move away from kops and look for other solutions. Kindly help with an approach that will work for this requirement.

@olvesh
Contributor

olvesh commented Apr 25, 2017

We used @kamilhristov's solution, but it is not optimal, since we opt out of the simplicity of the kops CLI. I started with a single-master cluster and edited it afterwards to change it to multiple masters.

The main issue is the UX part of the kops CLI - not sure if it is hard to fix? I guess the people most competent in the kops code rarely use the "rural" datacenters with fewer than 3 availability zones. ;-)

@justinsb
Member

This should be fixed!

kops create cluster --zones=us-east-1c --master-count=3 k8s.example.com will create 3 masters all in us-east-1c

You can also:

  • kops create cluster --zones=us-east-1b,us-east-1c --master-zones=us-east-1b,us-east-1c --master-count=3 - it will round-robin around the master zones, so it will pick b,c,b

I think we can close this issue unless anyone has any use cases not covered?
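To sanity-check the resulting layout, the listing command used earlier in this thread applies (exact instance group names for same-zone masters may vary by kops version):

kops create cluster --zones=us-east-1c --master-count=3 k8s.example.com
kops get ig --name k8s.example.com   # expect three Master instance groups, all in us-east-1c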

@chrislovecnm
Contributor

Closing - please request a reopen, or open a new issue. @justinsb do we have a docs issue?

@michaelajr

michaelajr commented Apr 22, 2020

I have had a lot of problems with this. I do not think this works. The masters in the same zone have a race condition around mounting their etcd volumes. When creating a new cluster, I get terminated masters... then they come back up, and eventually I reach steady state. But the mounts are all wrong. master1 might have master2's etcd events volume, but its own etcd main volume. If both go down and both come back up, they will once again contend for the volumes in the same zone, and there is no guarantee that they will get the ones they had before. Things go bad quickly.

Multi-master across 1 or 2 zones is an important use case. This issue needs to be reopened.

@michaelajr

michaelajr commented Apr 22, 2020

Well - after lots of manual terminations and testing - I guess it does work. The cluster always comes back OK, even with the race on volumes, and even though masters in the same AZ might mount different volumes at different times. Seems very odd. Can someone shed some light on this and let me know if this is all OK?
