Single zone, multiple masters #732
Comments
The way it currently works on AWS is by spinning up 4 auto scaling groups. The first three are just 1 master per AZ, and the final one is for the nodes (which seem to distribute randomly, as if by magic, for me). I haven't tried it with fewer than three, but when you run kops you can explicitly specify the zones desired... My assumption was this affected the nodes InstanceGroup also? If not, then I agree it's not a bad idea. I hope masters within the same zone should always keep consensus, right :-)
@brandoncole thoughts on this?
@chrislovecnm @watkinsv-hp I think there's merit in this request - perhaps not from a HA perspective but from how to handle node failures gracefully within one availability zone if that's all you're looking for. It's not uncommon to run cluster mirrors as the HA strategy with higher level load balancing. Sounds like the proposal here would be to have something like:
As the syntax for running multiple masters in the same zone?
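The snippet this proposal referred to was lost when the thread was exported. As a hypothetical illustration only (the `masterZones` key and zone names below are assumed, not taken from the original comment), the shape being discussed is repeating a zone to request multiple masters there:

```yaml
# Hypothetical sketch of the proposal, not actual kops configuration of the
# time: naming the same zone more than once to ask for multiple masters in it.
masterZones:
- us-east-1a
- us-east-1a
- us-east-1a
```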
Thanks for the discussion! As a user of kops, I would find a combination of [...] useful. I understand the complexity added by [...].
I was thinking that we should in fact specify instancegroups for the etcd clusters, rather than zones. It would be much more precise, and cope with clouds where the zones concept doesn't necessarily apply.
Been stewing on the idea of specifying instancegroups for the etcd clusters rather than zones. I like it more and more. Decoupling etcd from the masters makes a lot of sense and really minimizes the need for any kind of redundancy for the masters (in light of auto scaling groups respawning masters KIA). If there were an instancegroup for the etcd cluster and only 1 master, a master would at worst be unreachable for moments, with no manual intervention, assuming normal cloud shenanigans... which is really what I'm after here. Plus, managing individual Kubernetes components becomes much easier.
Yeah, that is an issue for me architecturally and for upgrades ;( I know this is a big debate, but I like to keep it as simple as possible. We have been managing multiple components on servers for years; the cost and complexity concern me. We are managing noisy neighbors on nodes all the time. How is managing etcd any different? I would have to have my mind changed significantly. So go, change my mind, tell me I am wrong!!!
Oh, and Kelsey started this; none of the Google installs have ever done this.
@watkinsv-hp open another issue for me if you want to discuss breaking out the etcd servers.
Not sure if it deserves a separate issue, but if I want to set up an HA cluster in an AWS region with only two AZs, this is not possible now. I would still want an HA etcd quorum (3+ nodes) and some zone separation if possible. Perhaps #772 would be the place for this/my issue, but I would still want two masters, one in each zone.
That is actually a good use case ;)
Having this feature would make it possible for my organisation to use kops, without it we can't. Here's our use case:
As it is, kops is unusable for us due to this limitation. @watkinsv-hp's suggestion from Oct 27 would solve this nicely. |
If anyone wants to design and implement this we would welcome the PR!! |
I took a look, unfortunately it was my first time ever looking at Go, so I didn't have much luck, but some notes for the next person: I think the relevant code is around line 442 of pkg/apis/kops/cluster.go.
So the first place I would start, if you are new to Go, is maybe writing an e2e test for it. Test it, then write it ;) We have office hours tomorrow. If you want to swing by, ping me on Slack.
kops 1.5 will support (but not encourage!) this sort of configuration :-)
@justinsb does this mean the alpha version of kops 1.5 released today should support a multi-master setup in only 2 AZs? For example:
@jbrunk1966 Unfortunately still the same error :-( I am waiting on this as well.
@MilanDasek my Go-oriented workmate checked the code of 1.5.0-alpha3, and it seems the code for this feature is still missing...
Is there any way to add masters to the cluster in other zones once the cluster is created? Thanks, M
I'm not sure, but it should be possible (although the process isn't well documented). What might be easier is deploying to two AZs; then once it's up you can scale one of the "ig"s (auto scaling groups) up. See "kops get ig", then edit the target with "kops edit ig foo".
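The edit flow mentioned above opens the instance group spec in your editor. As a hedged sketch (group name, machine type, and zones below are assumed, not from this thread), the fields you would change look roughly like this, with minSize/maxSize controlling the size of the underlying auto scaling group:

```yaml
# Sketch of what `kops edit ig nodes` might show; values are illustrative.
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  role: Node
  machineType: t2.medium
  minSize: 2   # raise this (and maxSize) to scale the group up
  maxSize: 2
  subnets:
  - eu-central-1a
  - eu-central-1b
```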
@starkers and how do you deploy masters to 2 AZs? When I try --master-zones eu-central-1a,eu-central-1b, the script tells me it is not possible to deploy only 2 masters; when I try --master-zones eu-central-1a,eu-central-1b,eu-central-1a, it tells me it detected another master in zone eu-central-1a. So it must be done manually after the cluster is created, but how?
We have a couple of issues and PRs open about this. Work in progress.
+1 for this issue. I am bound by the rules and limitations of my organization's policies. The shared VPC I am using will only allow 2 AZs. It would be great if I could spread three master instances across two AZs.
Regarding 1.5.0-alpha4:
Need assistance please: what do we have to configure, and how?
I have also tried 1.5.0-alpha4, and the only thing I saw is that we are able to edit the instance group for every master and to set min/max to something different than 1, but in that case the extra masters do not join the cluster.
This should be working if you set master zones to
@chrislovecnm nope, this still doesn't work:
results in:
As I understand from Justin, we have to run kops edit cluster and then edit the subnets to be the same.
Guys, can you please help me and write out exactly what I need to change to get 3 masters in one zone? kops edit ig master-eu-central-1a [...] Thanks.
Adding the documentation label. This is a highly visible and highly requested feature. We need docs on this ASAP (with an example). I will see if I can't get a working example into a markdown file shortly. CC @evildandelions @chrislovecnm @justinsb @geojaz in case any of you can offer an example here before I can 😄
I want to ask: is there any option for kops create cluster that will do all these things with masters/etcd members?
Here are step-by-step instructions to run 3 masters in a single AZ:
This way all instances will be created in one zone. The master group names are confusing, though.
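The step-by-step snippet itself was lost when this thread was exported. Based on the surrounding discussion, the end state is presumably three Master instance groups whose subnets all point at the same zone; a hedged reconstruction (zone and names assumed, not the original instructions) of one such group:

```yaml
# Hedged reconstruction, not the original snippet: each of the three Master
# instance groups is size 1 and has its subnet edited to the single desired
# zone. This is also why the names are confusing: a group generated as
# master-eu-central-1b can end up actually living in eu-central-1a.
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  name: master-eu-central-1b   # generated name kept, despite the zone change
spec:
  role: Master
  minSize: 1
  maxSize: 1
  subnets:
  - eu-central-1a              # subnet edited to the single desired zone
```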
@kamilhristov I don't think that this explanation is complete, because you have to configure the etcd members to be in the same AZ as well. Waiting for @justinsb to provide the latest official way of doing this.
@airstand the etcd members configuration uses the instance group name as a reference:
Thus, the etcd members' AZs match the instance group AZs. I am not sure that this is the "correct" way, but at least it is working.
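The configuration snippet referenced above did not survive the export. The shape being described is presumably the `etcdClusters` section of the cluster spec, where each etcd member names a Master instance group, so the member's volume (and thus its AZ) follows that group's subnet. A hedged sketch with assumed names:

```yaml
# Hedged sketch: tying each etcd member of the "main" cluster to a master
# instance group by name; the "events" etcd cluster would be configured the
# same way. Group names are illustrative.
etcdClusters:
- name: main
  etcdMembers:
  - name: a-1
    instanceGroup: master-eu-central-1a-1
  - name: a-2
    instanceGroup: master-eu-central-1a-2
  - name: b
    instanceGroup: master-eu-central-1b
```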
@kamilhristov - a bit rhetorical of me, but could you do this in eu-central-1 instead? :-) We will need to spin up Kubernetes clusters in regions with only two zones. Hopefully this will be working by the time kops 1.5 is released. If not we have to start looking for alternative ways of bootstrapping. Hope it doesn't come to that since kops makes things so easy.
@olvesh for regions with 2 zones, the cluster configuration has to be created from yaml in order to bypass the create cluster command validation. I have created an example manifest with minimal configuration. You need to run [...] Then create the SSH public key secret:
After that you should be able to populate the cluster resources with
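The manifest and commands referenced in the two comments above were lost in export. As a hedged sketch only (cluster name, CIDRs, and state-store bucket are all assumed, not taken from the original manifest), the likely workflow is `kops create -f cluster.yaml`, then `kops create secret --name k8s.example.com sshpublickey admin -i ~/.ssh/id_rsa.pub`, then `kops update cluster k8s.example.com --yes`, against a minimal 2-AZ spec along these lines:

```yaml
# Hedged sketch: a minimal Cluster spec for a region with two AZs, placing two
# etcd members in zone a and one in zone b. All names and CIDRs are assumed;
# the original attached manifest is not preserved in this thread.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  name: k8s.example.com
spec:
  cloudProvider: aws
  configBase: s3://example-state-store/k8s.example.com
  networkCIDR: 172.20.0.0/16
  subnets:
  - name: eu-central-1a
    zone: eu-central-1a
    cidr: 172.20.32.0/19
    type: Public
  - name: eu-central-1b
    zone: eu-central-1b
    cidr: 172.20.64.0/19
    type: Public
  etcdClusters:
  - name: main
    etcdMembers:
    - name: a-1
      instanceGroup: master-eu-central-1a-1
    - name: a-2
      instanceGroup: master-eu-central-1a-2
    - name: b
      instanceGroup: master-eu-central-1b
  - name: events
    etcdMembers:
    - name: a-1
      instanceGroup: master-eu-central-1a-1
    - name: a-2
      instanceGroup: master-eu-central-1a-2
    - name: b
      instanceGroup: master-eu-central-1b
```

Three matching Master instance groups (two with subnet eu-central-1a, one with eu-central-1b) would accompany this spec in the same yaml file or in separate documents.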
Hello, thanks for the info. Unfortunately, after I run it I get: W0129 10:09:16.817497 194 apply_cluster.go:635] unable to find version information for kops version "1.5.0-beta1" in channel, which is probably being solved here: #1667. Anyway, my yaml is attached, creating 3 masters (2 in the same zone and the 3rd in another one). I hope it will work once "version information" is solved. Or is there any workaround?
Do we have a solution for this yet? We are being forced to move away from kops and look for other solutions. Kindly help with an approach that will work for this requirement.
We used @kamilhristov's solution, but it is not optimal since we opt out of the simplicity of the kops CLI. I started with a single-master cluster and edited it afterwards to change it to multiple masters. The main issue is the UX part of the kops CLI; not sure if it is hard to fix? I guess the folks most competent in the kops code rarely use the "rural" datacenters with fewer than 3 availability zones. ;-)
This should be fixed!
You can also:
I think we can close this issue unless anyone has any use cases not covered?
Closing; please request to reopen, or open a new issue. @justinsb do we have a docs issue?
I have had a lot of problems with this. I do not think this works. The masters in the same zone have a race condition around mounting their etcd volumes. When creating a new cluster, I get terminated masters... then they come back up, and eventually I get a steady state. But the mounts are all wrong: master1 might have master2's etcd events volume, but its own etcd main volume. If both go down and both come back up, they will once again contend for the volumes in the same zone, and there is no guarantee that they will get the ones they had before. Things go bad quickly. Multi-master across 1 or 2 zones is an important use case. This issue needs to be reopened.
Well, after lots of manual terminations and testing, I guess it does work. The cluster always comes back OK, even with the race on volumes, and even though masters in the same AZ might mount different volumes at different times. Seems very odd. Can someone shed some light on this and let me know if this is all OK?
We would like to make multiple masters in a single availability zone when using the cloud provider AWS using kops. While #661 has some great discussion around single-zone vulnerability versus true high-availability, we have a slightly different use case where we want to spread individual clusters around multiple availability zones instead of spreading a single cluster across multiple availability zones. In this way, it's easier for us to specify a desired outcome when spreading a deployment across multiple availability zones and multiple regions using the federation annotations (the idea is more control over both desired zones and regions when having cluster-per-zone in multiple regions).
How do y'all feel about multiple masters in a single availability zone when viewed from this perspective? Is this a topic worth discussing? Should I go catch up on some other previous conversations? Thanks!