
Running 'kube-aws destroy' in the wrong folder could destroy the wrong cluster #249

Closed
whereisaaron opened this issue Jan 13, 2017 · 11 comments
Labels: lifecycle/rotten, open for pull request

Comments

@whereisaaron
Contributor

Nope, I haven't done that 😄. But I worry about doing it a lot...

Feature request 1: kube-aws destroy should display the clusterName and prompt the user to type the name of the cluster to confirm the deletion. People who need to script a destroy can use 'echo "cluster-name" | kube-aws destroy'.

I don't think a similar check is necessarily needed for kube-aws node-pool destroy, as it at least requires you to specify --node-pool-name. Though a similar prompt for the cluster or node pool name could protect against accidental re-runs from command history.

Feature request 2: kube-aws destroy should check for errors. Right now it always acts as if it succeeded, even when it actually failed because, for example, the cluster still has node pools, the cluster doesn't exist or was already deleted, or the wrong AWS account is active.
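
Very roughly, something like the sketch below is what I have in mind. The helper names and flow are made up for illustration (this is not kube-aws's actual code), and it assumes aws-sdk-go is used to drive CloudFormation:

```go
// Sketch of both requests (illustrative names only, not kube-aws's real code):
//  1. confirmDestroy makes the user type the exact cluster name, reading from
//     stdin so `echo "cluster-name" | kube-aws destroy` still works in scripts.
//  2. destroyStack checks that the stack actually exists, then waits for the
//     delete to finish and returns an error on failure instead of staying quiet.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func confirmDestroy(clusterName string) error {
	fmt.Printf("This will destroy cluster %q and its CloudFormation stack.\n", clusterName)
	fmt.Print("Type the cluster name to confirm: ")
	line, err := bufio.NewReader(os.Stdin).ReadString('\n')
	if err != nil {
		return fmt.Errorf("reading confirmation: %v", err)
	}
	if strings.TrimSpace(line) != clusterName {
		return fmt.Errorf("confirmation did not match %q; aborting", clusterName)
	}
	return nil
}

func destroyStack(cfn *cloudformation.CloudFormation, stackName string) error {
	// DeleteStack on a non-existent stack "succeeds", so check existence first
	// (this also catches the wrong-AWS-account / wrong-region case).
	describe := &cloudformation.DescribeStacksInput{StackName: aws.String(stackName)}
	if _, err := cfn.DescribeStacks(describe); err != nil {
		return fmt.Errorf("stack %q not found or not accessible: %v", stackName, err)
	}
	if _, err := cfn.DeleteStack(&cloudformation.DeleteStackInput{StackName: aws.String(stackName)}); err != nil {
		return fmt.Errorf("requesting deletion of %q: %v", stackName, err)
	}
	// The waiter returns an error if the stack ends up in DELETE_FAILED,
	// e.g. because node pool stacks still depend on it.
	if err := cfn.WaitUntilStackDeleteComplete(describe); err != nil {
		return fmt.Errorf("stack %q was not deleted cleanly: %v", stackName, err)
	}
	return nil
}

func main() {
	clusterName := "my-cluster" // the real command would take this from cluster.yaml

	if err := confirmDestroy(clusterName); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	cfn := cloudformation.New(session.Must(session.NewSession()))
	if err := destroyStack(cfn, clusterName); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cluster", clusterName, "destroyed")
}
```

With something along those lines, piping the name in still works for automation, and a destroy that fails (node pools still attached, stack already gone, wrong account) exits non-zero instead of pretending it succeeded.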

@mumoshu
Contributor

mumoshu commented Jan 14, 2017

Sounds good overall 👍
Nit, but for the former I'd also like to add a -f flag to skip the confirmation for automation purposes.
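
Something like this rough sketch is all I mean; it uses the standard library's flag package rather than kube-aws's actual CLI wiring, just to show the idea:

```go
// Rough sketch only: a -f flag that skips the interactive confirmation,
// using the standard flag package (not kube-aws's real CLI code).
package main

import (
	"flag"
	"fmt"
)

func main() {
	force := flag.Bool("f", false, "skip the destroy confirmation prompt")
	flag.Parse()

	if *force {
		fmt.Println("-f given: skipping the confirmation prompt")
	} else {
		fmt.Println("no -f: the cluster-name confirmation prompt would run here")
	}
	// ...then proceed with deleting the CloudFormation stack...
}
```

Scripts could then run 'kube-aws destroy -f' instead of piping the cluster name in.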

@whereisaaron
Contributor Author

Related: AWS recently added a 'termination protection' option for CloudFormation stacks. If you enable that, it should prevent a mistaken 'kube-aws destroy'. It doesn't block stack updates/rollbacks, even if those updates delete stuff, but it does stop an inadvertent, all-out destroy just because you were in the wrong folder.

https://aws.amazon.com/about-aws/whats-new/2017/09/aws-cloudformation-provides-stack-termination-protection/
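
For reference, enabling it programmatically looks roughly like the sketch below (illustrative only, with a hypothetical stack name; kube-aws doesn't do this today, and it assumes aws-sdk-go):

```go
// Illustration (not kube-aws code) of turning on CloudFormation termination
// protection for an existing stack with aws-sdk-go.
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudformation"
)

func main() {
	cfn := cloudformation.New(session.Must(session.NewSession()))
	stackName := "my-kube-aws-cluster" // hypothetical stack name

	_, err := cfn.UpdateTerminationProtection(&cloudformation.UpdateTerminationProtectionInput{
		StackName:                   aws.String(stackName),
		EnableTerminationProtection: aws.Bool(true),
	})
	if err != nil {
		log.Fatalf("failed to enable termination protection on %s: %v", stackName, err)
	}
	fmt.Printf("termination protection enabled for stack %s\n", stackName)
}
```

The same thing can be done by hand with the 'aws cloudformation update-termination-protection' CLI command after the stack is created.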

@mumoshu
Contributor

mumoshu commented Feb 22, 2018

Opened #1152. It will probably be the fix for this issue :)

@whereisaaron
Contributor Author

Yes, allowing new clusters to start with stack termination protection is great. Though it is easy enough to enable manually after the cluster starts.

@zmt

zmt commented Feb 26, 2018

> Yes, allowing new clusters to start with stack termination protection is great. Though it is easy enough to enable manually after the cluster starts.

Yes, but that's a manual step which can be undocumented, forgotten, or otherwise lost in the process, which undercuts the value of kube-aws as a management tool for Kubernetes on AWS.

@mumoshu
Contributor

mumoshu commented Feb 26, 2018

Thx for the feedback.
I agree!
I've marked it 'good first issue' to welcome a contribution from anyone.

@kiich
Contributor

kiich commented Mar 19, 2018

+1 for this feature, as I recently had to implement it via aws-cli instead, which as mentioned here is an easy enough step to do but one that gets forgotten, etc.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 23, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 23, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
