
Termination Protection for Kops clusters in AWS #490

Closed
krisnova opened this issue Sep 22, 2016 · 19 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@krisnova
Member

krisnova commented Sep 22, 2016

We have a use case to enable termination protection in AWS for components created with Kops.
Primarily:

  • EC2 Instances
  • VPC
  • ASG

We would like the ability to deploy a termination-protected cluster. All (or as many as possible) components of the cluster should be created with termination protection enabled in AWS.
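For reference, on plain EC2 instances this maps to the DisableApiTermination instance attribute; a minimal sketch with the AWS SDK for Go (v1), using a placeholder instance ID. Note that VPCs have no equivalent attribute, so protecting them would need a different mechanism:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Enable termination protection on one instance; this sets the
	// instance's DisableApiTermination attribute.
	// "i-0123456789abcdef0" is a placeholder instance ID.
	_, err := svc.ModifyInstanceAttribute(&ec2.ModifyInstanceAttributeInput{
		InstanceId:            aws.String("i-0123456789abcdef0"),
		DisableApiTermination: &ec2.AttributeBooleanValue{Value: aws.Bool(true)},
	})
	if err != nil {
		log.Fatalf("enabling termination protection: %v", err)
	}
}
```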

@justinsb what are your thoughts?
@chrislovecnm what about you?

@mrichmon

Presumably this would be useful as separate options:

  • enable termination protection on my master nodes
  • enable termination protection on my worker nodes

My thinking is that cluster size will vary frequently based on workload, and the cluster should be able to recover from the loss of a few worker nodes.

@chrislovecnm
Member

@mrichmon exactly. Also, it is more than just the nodes; there are the VPC and the ASG as well.
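For the ASG itself, AWS has no direct termination-protection switch; the closest primitive is instance scale-in protection. A hedged sketch with the AWS SDK for Go (v1), using a placeholder group name; note this guards against scale-in events, not against explicit API termination:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/autoscaling"
)

func main() {
	svc := autoscaling.New(session.Must(session.NewSession()))

	// Protect newly launched instances in the group from scale-in.
	// "nodes.example.k8s.local" is a placeholder ASG name.
	_, err := svc.UpdateAutoScalingGroup(&autoscaling.UpdateAutoScalingGroupInput{
		AutoScalingGroupName:             aws.String("nodes.example.k8s.local"),
		NewInstancesProtectedFromScaleIn: aws.Bool(true),
	})
	if err != nil {
		log.Fatalf("enabling scale-in protection: %v", err)
	}
}
```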

@krisnova
Member Author

So pending @mrichmon's use case (and ours as well, @chrislovecnm), I think the big decision that needs to be made here (other than whether to build this feature at all) is:

Do we have termination protection as a global flag affecting all components of the cluster?

-or-

Do we allow component-specific configuration: yes for this, no for that (i.e., masters vs. workers)?
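To make the second option concrete, here is a purely hypothetical Go sketch of a per-instance-group field; this is illustrative only, not the actual kops API:

```go
// Hypothetical sketch only; not the real kops InstanceGroupSpec.
// It shows how per-component configuration could hang off the
// instance group rather than the cluster as a whole.
type InstanceGroupSpec struct {
	// Role distinguishes masters from worker nodes, e.g. "Master" or "Node".
	Role string `json:"role,omitempty"`

	// TerminationProtection, if true, would create the cloud resources
	// backing this group with termination protection enabled.
	TerminationProtection *bool `json:"terminationProtection,omitempty"`
}
```

A global cluster-level flag could then simply be shorthand that defaults this field on every group.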

@chrislovecnm
Member

In terms of doing upgrades or other general ops, it probably would make sense to break up masters vs. nodes. @justinsb any opinion on this?

@justinsb justinsb added this to the 1.3.1 milestone Sep 24, 2016
@krisnova
Member Author

In the name of getting this feature through, and keeping it flexible, I think we should move forward with node/master flags for termination protection. @chrislovecnm can we get a priority on this? Feel free to assign it to me and I can start banging it out.

@justinsb justinsb modified the milestones: 1.5.0, 1.5 Dec 28, 2016
@jmound

jmound commented Jan 13, 2017

I think EBS volume "Delete on termination" should be set to false by default. From what I see, it is currently set to true.
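For context, that flag lives on the block device mapping when the instance is launched; a minimal sketch with the AWS SDK for Go (v1), with placeholder AMI and device names, showing a launch whose root EBS volume would survive termination:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ec2"
)

func main() {
	svc := ec2.New(session.Must(session.NewSession()))

	// Launch one instance whose root EBS volume is kept after the
	// instance terminates. AMI ID and device name are placeholders.
	_, err := svc.RunInstances(&ec2.RunInstancesInput{
		ImageId:      aws.String("ami-0123456789abcdef0"),
		InstanceType: aws.String("t3.medium"),
		MinCount:     aws.Int64(1),
		MaxCount:     aws.Int64(1),
		BlockDeviceMappings: []*ec2.BlockDeviceMapping{{
			DeviceName: aws.String("/dev/xvda"),
			Ebs: &ec2.EbsBlockDevice{
				DeleteOnTermination: aws.Bool(false),
			},
		}},
	})
	if err != nil {
		log.Fatalf("launching instance: %v", err)
	}
}
```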

@krisnova
Member Author

@jmound I think there are two issues here: (1) termination protection, and (2) delete-on-termination for EBS volumes.

I think we might need to open a second issue for the latter (although there might be one already).

@jmound

jmound commented Jan 17, 2017

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 20, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@leoskyrocker: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@leoskyrocker

Hmm, can we reopen this?

@rifelpet rifelpet reopened this Mar 18, 2020
@rifelpet
Member

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Mar 18, 2020
@rifelpet rifelpet removed this from the 1.5.2 milestone Mar 18, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 16, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 16, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@SCLogo

SCLogo commented Jan 12, 2021

It would be good if we could set MFA or a password or something on the kops side, so we could protect the cluster and its configs from accidental deletion.
