Termination Protection for Kops clusters in AWS #490

Closed
kris-nova opened this issue Sep 22, 2016 · 18 comments

@kris-nova (Member) commented Sep 22, 2016

We have a use case to enable termination protection in AWS for components created with Kops.
Primarily:

  • EC2 Instances
  • VPC
  • ASG

We would like the ability to deploy a termination-protected cluster. All (or as many as possible) components of the cluster should be created with termination protection enabled in AWS.

@justinsb what are your thoughts?
@chrislovecnm what about you?
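
For reference, on the EC2 side this maps to the DisableApiTermination instance attribute, which would have to be set on every instance kops launches. A minimal aws-sdk-go sketch (region and instance ID are placeholders, not kops code):

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	// Placeholder instance ID; in kops this would come from the cloud inventory.
	instanceID := "i-0123456789abcdef0"

	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := ec2.New(sess)

	// Termination protection on a single instance is the DisableApiTermination attribute.
	_, err := svc.ModifyInstanceAttribute(&ec2.ModifyInstanceAttributeInput{
		InstanceId:            aws.String(instanceID),
		DisableApiTermination: &ec2.AttributeBooleanValue{Value: aws.Bool(true)},
	})
	if err != nil {
		log.Fatalf("enabling termination protection: %v", err)
	}
	fmt.Printf("termination protection enabled on %s\n", instanceID)
}
```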

@mrichmon commented Sep 22, 2016

Presumably this would be useful as some separate options:

  • enable termination protection on my master nodes
  • enable termination protection on my worker nodes

My thinking is that cluster size will vary frequently based on workload, and the cluster should be able to recover from the loss of a few worker nodes.

@chrislovecnm (Member) commented Sep 22, 2016

@mrichmon exactly. Also, it is more than just the nodes: the VPC and the ASG as well.
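
For the ASG piece specifically, the closest AWS primitive I'm aware of is instance scale-in protection on the group's instances (termination protection itself is an EC2 instance attribute, and VPCs have no equivalent per-resource setting as far as I can tell). A minimal aws-sdk-go sketch, with the group name and instance ID as placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	// Placeholder names; kops derives these from the instance group / cluster name.
	asgName := "master-us-east-1a.masters.example.cluster.k8s.local"
	instanceIDs := []*string{aws.String("i-0123456789abcdef0")}

	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
	svc := autoscaling.New(sess)

	// Protect the listed instances from being terminated by scale-in activity.
	_, err := svc.SetInstanceProtection(&autoscaling.SetInstanceProtectionInput{
		AutoScalingGroupName: aws.String(asgName),
		InstanceIds:          instanceIDs,
		ProtectedFromScaleIn: aws.Bool(true),
	})
	if err != nil {
		log.Fatalf("setting scale-in protection: %v", err)
	}
}
```

Note that scale-in protection only guards against the ASG terminating instances during scale-in, not against manual termination, which is what DisableApiTermination covers on the instances themselves.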

@kris-nova (Member, Author) commented Sep 23, 2016

So pending @mrichmon's use case (and ours as well, @chrislovecnm), I think the big decision that needs to be made here (other than whether to do this feature at all) is:

Do we have termination protection as a global flag affecting all components of the cluster?

-or-

Do we allow for component-specific configuration, yes for this, no for that (i.e. masters/workers)?
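
To make the two options concrete, here is a purely hypothetical sketch of what they could look like in the kops API types; neither field exists today, and the names are made up for illustration:

```go
// Package kops here is only a stand-in for the real API types package;
// these fields are hypothetical and do not exist in kops.
package kops

// Per-component option: each instance group (masters, nodes, bastions) opts in separately.
type InstanceGroupSpec struct {
	// ... existing fields ...

	// TerminationProtection enables AWS termination protection for instances in this group.
	TerminationProtection *bool `json:"terminationProtection,omitempty"`
}

// Global option: a single cluster-wide switch applied to every supported resource.
type ClusterSpec struct {
	// ... existing fields ...

	// TerminationProtection enables AWS termination protection on all supported resources.
	TerminationProtection *bool `json:"terminationProtection,omitempty"`
}
```

The per-component form can express the global form (set it on every group), so it is the more flexible of the two.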

@chrislovecnm (Member) commented Sep 23, 2016

In terms of doing upgrades or other general ops, it probably would make sense to break this up into masters vs. nodes. @justinsb any opinion on this?

@justinsb justinsb added this to the 1.3.1 milestone Sep 24, 2016
@kris-nova (Member, Author) commented Oct 15, 2016

In the name of getting this feature through, and keeping it flexible, I think we should move forward with node/master flags for termination protection. @chrislovecnm can we get a priority on this? Feel free to assign it to me and I can start banging it out.

@justinsb justinsb modified the milestones: 1.5.0, 1.5 Dec 28, 2016
@jmound commented Jan 13, 2017

I think EBS volume "Delete on termination" should be set to false by default. From what I see, it is currently set to true.
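
For context on where that setting lives: whether an EBS volume outlives its instance is the DeleteOnTermination field of the block device mapping that kops (or anything else launching instances) passes to EC2. A minimal aws-sdk-go sketch with placeholder device name and size:

```go
package main

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// rootVolumeMapping builds a block device mapping whose EBS volume is kept
// (not deleted) when the instance is terminated.
func rootVolumeMapping() *ec2.BlockDeviceMapping {
	return &ec2.BlockDeviceMapping{
		DeviceName: aws.String("/dev/xvda"), // placeholder root device name
		Ebs: &ec2.EbsBlockDevice{
			VolumeSize:          aws.Int64(64),
			VolumeType:          aws.String("gp2"),
			DeleteOnTermination: aws.Bool(false), // keep the volume after termination
		},
	}
}

func main() {
	_ = rootVolumeMapping()
}
```

One caveat with flipping the default to false: volumes from churned nodes are then left behind and keep accruing cost until someone deletes them, which is presumably why it defaults to true today.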

@kris-nova (Member, Author) commented Jan 13, 2017

@jmound I think there are two issues here: (1) termination protection, and (2) "delete on termination" for EBS volumes.

I think we might need to open a second issue for (2) (although there might be one already).

@jmound commented Jan 17, 2017

I agree that they're two different issues; I was trying to add the second one here. I've opened a separate issue: #1516
@fejta-bot commented Dec 20, 2017

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@fejta-bot commented Jan 19, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@fejta-bot commented Feb 18, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor) commented Mar 18, 2020

@leoskyrocker: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@leoskyrocker commented Mar 18, 2020

hmm can we reopen this?

@rifelpet rifelpet reopened this Mar 18, 2020
@rifelpet (Member) commented Mar 18, 2020

/remove-lifecycle rotten

@rifelpet rifelpet removed this from the 1.5.2 milestone Mar 18, 2020
@fejta-bot commented Jun 16, 2020

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@fejta-bot commented Jul 16, 2020

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@fejta-bot commented Aug 15, 2020

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor) commented Aug 15, 2020

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
