
eksctl (GitTag:"0.1.33") unable to delete cluster (Error: waiting for CloudFormation stack) #832

Closed
ktamiola opened this issue Jun 2, 2019 · 6 comments

@ktamiola commented Jun 2, 2019

What happened?
eksctl is unable to finalize cluster deletion

What you expected to happen?
I would expect the deletion to finish without errors.

How to reproduce it?
Create a demo cluster of any size and try to delete it with eksctl 0.1.33 (version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.1.33"}).
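
A minimal reproduction sketch (the cluster name and region below are placeholders, not taken from the report):

# Create a demo cluster, deploy something that provisions ALBs
# (e.g. an ALB Ingress), then try to tear the cluster down.
eksctl create cluster --name demo --region us-west-2
eksctl delete cluster --name demo --region us-west-2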

Anything else we need to know?
I am using macOS (Darwin Kernel Version 17.7.0).

Versions
Please paste in the output of these commands:

$ eksctl version
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.1.33"}
$ uname -a
Darwin 17.7.0 Darwin Kernel Version 17.7.0
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T18:56:40Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}

Logs

[✖]  waiting for CloudFormation stack "eksctl-demo-nodegroup-standard-workers" to reach "DELETE_COMPLETE" status: ResourceNotReady: failed waiting for successful resource state
[✖]  failed to delete cluster with nodegroup(s)
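
The resource that blocks deletion can usually be identified from the stack's events; a sketch with the AWS CLI, reusing the stack name from the log above (the region is an assumption):

# Show which resources failed to delete and why.
aws cloudformation describe-stack-events \
  --stack-name eksctl-demo-nodegroup-standard-workers \
  --region us-west-2 \
  --query 'StackEvents[?ResourceStatus==`DELETE_FAILED`].[LogicalResourceId,ResourceStatusReason]' \
  --output table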

ktamiola added the kind/bug label Jun 2, 2019

@martina-if (Member) commented Jun 3, 2019

Hi @ktamiola, thank you for the report. I can't reproduce this issue. Can you give us more information about what the cluster looked like? Did you specify a VPC? Subnets?

@ktamiola (Author) commented Jun 3, 2019

Thank you so much for the swift response, @martina-if. The issue was likely caused by an AWS Ingress deployment. I am preparing server config data for you guys.

@martina-if (Member) commented Jun 4, 2019

I see. If you had some ALBs, I think this is a known issue (#536). When that happens, I remove them manually and then try deleting the cluster again.
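
A sketch of that manual cleanup (the commands are an assumed workflow, not from this thread): delete the Ingress objects so the controller deprovisions its ALBs, remove any leftovers directly, then retry the cluster deletion.

# Let the ALB Ingress Controller clean up the load balancers it created.
kubectl delete ingress --all --all-namespaces

# List remaining load balancers and delete any leftover ALB by ARN.
aws elbv2 describe-load-balancers \
  --query 'LoadBalancers[].[LoadBalancerName,LoadBalancerArn]' --output table
aws elbv2 delete-load-balancer --load-balancer-arn <arn-of-leftover-alb>

# Retry the cluster deletion.
eksctl delete cluster --name demo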

@errordeveloper (Member) commented Jun 4, 2019

@ktamiola broadly speaking, we have a number of deletion issues, which are to do with the fact that we don't have a way to track ad-hoc resources that get created by various things in the cluster. As Martina mentioned, we are tracking these under #536, but #103 is also the umbrella issue. I hope you don't mind if I close this; feel free to re-open if you think this is a unique case.

@ktamiola (Author) commented Jun 4, 2019

@errordeveloper (Member) commented Jun 4, 2019