EKS Module does not allow clean destroy (dependency violation) #60
This is most likely caused by anything you provisioned onto the cluster that creates ENIs (the ALB ingress controller, NGINX ingress, etc.).
So if I create the EKS cluster with Terraform but deploy an application with kubectl, then `terraform destroy` won't work? This has left me in a mess where I have to delete the leftover resources manually. What's the point of Terraform if it can't destroy all the cluster resources just because you deployed an app to EKS?
If you deploy an app that is just a pod on the cluster, you can safely delete the cluster with Terraform without deleting the app first. If you deploy something like the AWS Load Balancer Controller, it creates additional AWS resources *outside* of Terraform's control, so Terraform has no visibility into them; yet those resources consume resources that Terraform *did* create. Going by the OP's error message, I would suspect this is some form of load balancer that is using the subnets of the VPC, and that load balancer therefore *has* to be deleted before Terraform can tear down the subnets.
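To make the ordering concrete, here is a rough sketch (commands are illustrative, not taken from this thread) of deleting the Kubernetes objects that own load balancers first, so the controller releases its ALBs/ENIs before Terraform tries to delete the VPC:

```shell
# Delete Ingress objects so the ALB/ingress controller removes its load balancers.
kubectl delete ingress --all --all-namespaces

# Delete Services of type LoadBalancer (each one owns an ELB and its ENIs).
kubectl get svc --all-namespaces -o json \
  | jq -r '.items[] | select(.spec.type=="LoadBalancer")
           | "\(.metadata.namespace) \(.metadata.name)"' \
  | while read ns name; do kubectl delete svc -n "$ns" "$name"; done

# Give the controller time to release the ENIs, then destroy the cluster.
terraform destroy
```

This assumes `jq` is available and that a delay (or a poll on remaining ENIs) is acceptable between the deletes and the destroy.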
I have to wonder whether Terraform should be used to create an EKS cluster at all. A cluster will, in many cases, have deployments that include load balancers, at which point Terraform's state is stale. If you forget to destroy a load balancer or anything else a deployment has created and then run `terraform destroy`, you get a real mess: a long manual process of finding the remaining resources and deleting them one by one. AWS has Resource Explorer, but it shows you all the default resources in every region, which you don't want to destroy, so you have to hunt for your orphaned resources in a big haystack of defaults. It's practically worthless. Do you have any suggestions for how to clean up these orphaned resources?
I think you are missing the crux of the issue: any IaC tool faces the same challenge. IaC tools only manage the resources they know about and control, so you have to plan your workflow accordingly when bridging across different domains/tools.
aws-nuke was able to clean up the mess; it looks like something you'll need when cleaning up after Terraform-managed EKS clusters.
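For anyone reaching for aws-nuke: it is destructive by design, so it requires a config file that scopes what it may touch. A minimal sketch (account IDs are placeholders; field names follow the aws-nuke README at the time of writing, so check your version):

```yaml
regions:
  - eu-west-1
  - global

# Accounts aws-nuke must refuse to run against, e.g. production.
account-blocklist:
  - "999999999999"

accounts:
  "000000000000":            # the sandbox account to clean
    filters:
      IAMUser:
        - "my-admin-user"    # keep this user even when nuking
```

Always do a dry run first (the default) and review the listed resources before passing the flag that confirms deletion.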
Again, this is not specific to EKS. If I launch an EC2 instance with Terraform, and a custom program on that instance launches other EC2 instances or other AWS resources, then `terraform destroy` will only remove the resources Terraform itself created; everything launched from inside the instance is left behind.
`terraform destroy` should not leave a mess. It should be able to destroy the things it created, and if it cannot, it needs to warn you in advance. That kind of check should be part of the plan step.
Hey all, I'm going to go ahead and mark this one as closed since there hasn't been much activity lately. I wanted to raise one option, however: you can manage the resources you deploy on top of Kubernetes with Terraform too, using the Kubernetes provider or the Helm provider. In that case those resources live in a Terraform state file, and a `terraform destroy` removes them along with everything else Terraform created.
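As a sketch of that approach (assuming the terraform-aws-eks module's usual outputs; names are illustrative), a Helm release managed by Terraform sits in state, so the destroy graph removes it, and the ELB its Service created, before the cluster and VPC go away:

```hcl
# Sketch: Helm provider pointed at the EKS cluster created elsewhere in
# this configuration. Auth setup is abbreviated.
provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
    # exec-based authentication (aws eks get-token) omitted for brevity
  }
}

resource "helm_release" "ingress_nginx" {
  name             = "ingress-nginx"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
}
```

Because the release depends on the cluster, Terraform destroys it first, which gives the chart's controllers a chance to release their AWS resources before the VPC teardown begins.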
It seems this EKS module implementation does not allow a clean destroy.

I get the following on `terraform destroy` without modification of the code:

NOTE: I found the following article useful in cleanup: https://aws.amazon.com/premiumsupport/knowledge-center/troubleshoot-dependency-error-delete-vpc/
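When chasing this kind of `DependencyViolation`, a quick way to see what is still holding the VPC is to list its remaining network interfaces (a sketch; `VPC_ID` is a placeholder for your VPC):

```shell
VPC_ID=vpc-0123456789abcdef0
aws ec2 describe-network-interfaces \
  --filters "Name=vpc-id,Values=${VPC_ID}" \
  --query 'NetworkInterfaces[].[NetworkInterfaceId,Description,Status]' \
  --output table
```

The `Description` column usually names the owner (an ELB, a Lambda, an EKS pod ENI), which tells you what to delete before retrying the destroy.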