Kubernetes on AWS multi-AZ by default #13063
Conversation
Can one of the admins verify that this patch is reasonable to test? (reply "ok to test", or if you trust the user, reply "add to whitelist") If this message is too spammy, please complain to ixdy.
It occurs to me that where this will not work is EBS disks. Since EBS volumes cannot be moved between Availability Zones, this will be tricky. Thoughts are welcome; I wonder if this is something that has been dealt with on other providers. I'd be tempted to modify the scheduler so that it prefers to keep a pod which relies on an awsElasticBlockStore volume in the same AZ unless it cannot fit, in which case it could perhaps snapshot the disk into the other AZ? That seems clunky (I wouldn't want extended pod relocation time due to a very, very large snapshot).
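A minimal sketch of the zone-pinning idea above. The `awsElasticBlockStore` volume type is real, but the `zone` label key, pod name, image, and volume ID are all hypothetical; nodes would need to be labelled with their AZ for this to work, and this is not what the PR implements:

```sh
# Sketch only: pins a pod with an EBS volume to the volume's AZ via a
# nodeSelector on a zone label. The label key "zone" is hypothetical.
kubectl create -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: ebs-pinned-pod            # illustrative name
spec:
  nodeSelector:
    zone: us-west-2a              # must match the AZ of the EBS volume
  containers:
  - name: app
    image: nginx                  # placeholder image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    awsElasticBlockStore:
      volumeID: vol-0123456789abcdef0   # illustrative volume ID
      fsType: ext4
EOF
```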
Assigned to @justinsb
Labelling this PR as size/L
IMO, having a primary and a secondary AZ is Not Enough (tm). A proper multi-AZ, highly available cluster should span more than two zones, to enable quorum-type service deployments (Redis Sentinels, for example) and to properly handle network partitions. See also my comment on the linked issue. Either we have an HA setup (starting with the Kubernetes components themselves, e.g. etcd nodes spanning more than two zones), or we stick to one cluster per zone, so as not to give users false assumptions about availability and fault tolerance.
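To make the quorum point concrete: with three etcd members split 2/1 across two zones, losing the two-member zone loses quorum, while one member per zone across three zones tolerates any single-zone failure. A minimal sketch, assuming hypothetical hosts in three AZs (the names, IPs, and zones are illustrative; the etcd flags themselves are standard):

```sh
# Hypothetical three-zone etcd layout: one member per AZ, so losing any
# single zone still leaves a 2/3 majority.
etcd --name etcd-a \
  --initial-advertise-peer-urls http://10.0.0.10:2380 \
  --listen-peer-urls http://10.0.0.10:2380 \
  --initial-cluster-state new \
  --initial-cluster etcd-a=http://10.0.0.10:2380,etcd-b=http://10.0.1.10:2380,etcd-c=http://10.0.2.10:2380
# etcd-b (second AZ) and etcd-c (third AZ) start with the same
# --initial-cluster value and their own --name and peer URLs.
```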
+1 to what @zytek said above.
cc @quinton-hoole @justinsb, is this going to get a review?
This appears to be stalled.
Hello!
References issue #13056 - Kubernetes on AWS via "kube-up" should be more highly available.
Part one: Minions should scale across two Amazon Availability Zones
Part two: Kube-master should be highly available with one instance per AZ
This is part one of two.
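For context, a sketch of how a multi-AZ bring-up might look. `KUBERNETES_PROVIDER`, `KUBE_AWS_ZONE`, and `cluster/kube-up.sh` are existing kube-up pieces, but the secondary-zone variable is an illustrative name, not necessarily what this PR implements:

```sh
# Hypothetical invocation. KUBERNETES_PROVIDER and KUBE_AWS_ZONE are
# real kube-up settings; KUBE_AWS_SECONDARY_ZONE is an illustrative
# name for the second minion AZ, not a confirmed variable in this PR.
export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=us-west-2a            # primary AZ
export KUBE_AWS_SECONDARY_ZONE=us-west-2b  # second AZ for minions (hypothetical)
cluster/kube-up.sh
```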
This has a fairly high potential to break things, but I've done quite a bit of testing and it seems to behave just fine. Since the two AZs exist in one VPC and networking is flat between them, I don't expect any issues.
Additionally, this replaces the single /24 subnet with two /24s within the VPC's larger /16. The total number of minions on AWS without any modification therefore increases from 254 to 508 (two /24s).
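To make the address math concrete, an illustrative layout; the /16 VPC CIDR is assumed to match kube-up's AWS default, and the exact per-AZ /24s are hypothetical:

```sh
# Illustrative subnet layout; the VPC CIDR is assumed, and the per-AZ
# /24s are hypothetical examples.
VPC_CIDR=172.20.0.0/16
SUBNET_A=172.20.0.0/24   # minion subnet in the first AZ
SUBNET_B=172.20.1.0/24   # minion subnet in the second AZ
# Each /24 has 256 addresses minus network and broadcast:
echo $(( 2 * (256 - 2) ))   # prints 508, the total minion capacity
```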
@justinsb should probably take a peek at this one :)
Thanks, everyone! We're absolutely loving Kubernetes on my team. Keep up the great work!