
Kops Create Master HA with 2 AZ AWS Singapore #3088

Closed
abdidarmawan007 opened this Issue Jul 29, 2017 · 5 comments


abdidarmawan007 commented Jul 29, 2017

Hi all, can I have kops create HA Kubernetes masters in the Singapore region with only 2 AZs?
For example:
--zones=ap-southeast-1a,ap-southeast-1b --master-zones=ap-southeast-1a,ap-southeast-1b

@abdidarmawan007 abdidarmawan007 changed the title from Kops Create Master HA with 2 Zone AWS Singapore to Kops Create Master HA with 2 AZ AWS Singapore Jul 29, 2017


eedugon commented Jul 29, 2017


Contributor

msvbhat commented Jul 31, 2017

I agree with @eedugon: you shouldn't create an etcd cluster with two members. It's better to have an odd number of masters so that the etcd cluster has an odd number of members.

But for your case, you can specify --master-count=3 and --master-zones=ap-southeast-1a,ap-southeast-1b. I haven't tested this myself, but in theory it should work (it may not be advisable, but it should work).
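The quorum arithmetic behind the odd-member advice can be sketched in a few lines (a minimal illustration, not part of kops or etcd itself). etcd stays writable only while a majority of members, floor(n/2) + 1, is up:

```python
def quorum(members: int) -> int:
    # Majority needed for etcd (Raft) to accept writes.
    return members // 2 + 1

for n in (1, 2, 3):
    # Failures tolerated = members beyond the quorum.
    tolerated = n - quorum(n)
    print(f"{n} members: quorum={quorum(n)}, tolerates {tolerated} failure(s)")
```

With 2 members the quorum is 2, so losing either one halts the cluster — no better than a single master. That is why 3 masters (quorum 2, tolerating 1 failure) is the usual minimum for HA, even if two of them share an AZ.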


fejta-bot commented Jan 2, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale


fejta-bot commented Feb 7, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale


fejta-bot commented Mar 9, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
