
Exclude resources from the destroy process #23547

Open
kerwanp opened this issue Dec 3, 2019 · 21 comments
Labels
config · enhancement · lifecycle · v0.12 (Issues (primarily bugs) reported against v0.12 releases)

Comments

@kerwanp

kerwanp commented Dec 3, 2019

Current Terraform Version

Terraform v0.12.16

Use-cases

Let's say our Terraform configuration contains:

  • terraform-aws-modules/terraform-aws-vpc used to create a VPC with private and public subnets and all the VPC configuration around it
  • terraform-aws-modules/terraform-aws-eks used to create an EKS Cluster and some workers.
    • Using the previously created private and public subnets
  • Some Kubernetes and Helm configurations using terraform-providers/terraform-provider-helm and terraform-providers/terraform-provider-kubernetes
    • With LoadBalancer services using annotations to automatically create ELBs

Everything is currently running and we want to run terraform destroy.
We encounter two problems:

  1. A failure problem
    Terraform will try to destroy the VPC components such as subnets, but this fails because they use internet gateways that cannot be deleted while the automatically created ELBs still reference them. The easiest way to destroy everything in the VPC is simply to delete the VPC itself.

  2. A performance problem
    Terraform will destroy every Kubernetes resource (namespaces, services, deployments, etc.) and only then the cluster. This is a waste of time: destroying the cluster and the workers is enough.

Attempted Solutions

I would like to be able to tell Terraform that a resource is excluded from the destroy process.

Workaround

I made two configurations:

  • Main configuration: contains all the components that need to be destroyed
    • E.g. VPC, cluster, RDS, DynamoDB
  • Sub configuration: contains all the components that will be destroyed on the provider side as a consequence of destroying the main components
    • E.g. Kubernetes configuration, subnets, internet gateway, etc.

With this pattern, we can then create a simple CLI with a destroy command (sketched below):

  1. Run the command terraform destroy in the main configuration
  2. When step 1 has finished successfully, remove the state of the sub configuration
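
A rough sketch of that flow (the main/ and sub/ directory names are hypothetical, and local state is assumed):

# 1. Destroy the main configuration (VPC, cluster, RDS, DynamoDB, ...).
cd main
terraform destroy

# 2. The sub configuration's resources are already gone on the provider side,
#    so drop them from state instead of destroying them one by one.
#    (Indexed addresses may need extra quoting.)
cd ../sub
terraform state list | xargs terraform state rm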

Proposal

By adding an annotation system, plugins could interact with the annotated resources and listen to events (state refresh, pre/post creation, pre/post destroy).
This could be useful for a ton of features.

Here is an example

@IgnoreFail("creation")
@NoDestroy
resource "kubernetes_cluster_role_binding" "tiller" {
  metadata {
    name = "tiller"
  }
  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind = "ClusterRole"
    name = "cluster-admin"
  }
  subject {
    kind = "ServiceAccount"
    name = "tiller"
    namespace = "kube-system"
  }
}

@NoDestroy
The resource will be ignored in the destroy process.

@IgnoreFail
If the operation fails while destroying, creating, or refreshing, the failure is simply ignored.
A parameter can restrict this to just one of those phases.
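
For comparison, the same intent expressed with Terraform's existing lifecycle block syntax instead of annotations might look like the sketch below; note that skip_destroy and ignore_failure are hypothetical arguments that do not exist today.

resource "kubernetes_cluster_role_binding" "tiller" {
  # ... same body as the example above ...

  lifecycle {
    skip_destroy   = true         # hypothetical: drop this resource from the destroy graph
    ignore_failure = ["creation"] # hypothetical: tolerate failures during the listed phases
  }
}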

@danieldreier
Contributor

@kerwanp thanks for proposing this. I think this has a lot of overlap with "feature request: inverse targeting / exclude" and "prevent_destroy should let you succeed".

We're going to need to do some product and design work figuring out the right approach here, but there's clearly a need for improvements that make it easier to create resources via terraform that are then not destroyed using terraform. I don't have a timeframe for you on when we'll get to it, but this is helpful and is consistent with other feedback I've seen.

@danieldreier danieldreier added the v0.12 Issues (primarily bugs) reported against v0.12 releases label Dec 5, 2019
@AzySir

AzySir commented Oct 11, 2020

Hey @danieldreier, we're obviously past 0.12 now - was anything added for this in 0.13?
If not - is it still planned?

@danieldreier
Contributor

@AzySir no, this didn't make it into 0.13 and will also not be in the upcoming 0.14. It's a common request and the utility is clear, but designing this in a way that doesn't help people paint themselves into a corner is challenging.

@emctl

emctl commented Oct 12, 2020

Could we label this as 0.15 to keep track?

@AzySir

AzySir commented Oct 22, 2020

Honestly, it's concerning that this functionality doesn't already exist...

@ache154

ache154 commented Oct 29, 2020

I have a use case where the customer wanted to keep the resource groups in their Azure environment but destroy everything else. This feature would have been useful.

To work around this, I wrote a PowerShell script that removed all the resources but kept the resource groups.

@bkarakashev

In the meantime, a workaround:
https://stackoverflow.com/questions/55265203/terraform-delete-all-resources-except-one
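
A condensed sketch of that approach, assuming the resource to keep is addressable in state (the address below is made up):

# Make Terraform "forget" the resource you want to keep; it stays in the cloud.
terraform state rm aws_s3_bucket.keep_me
# Everything still tracked in state is then destroyed as usual.
terraform destroy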

@adelwin

adelwin commented May 20, 2021

I have a separate use case; it's perhaps not the correct way to do things, but maybe I can get some opinions.

In my case, aside from managing kube clusters using TF, I also manage the default set of workloads in the cluster.
I'd like to be able to tell Terraform:

destroy this cluster,
but skip the destroy process for these workloads,
and go straight to destroying the cluster; the workloads are inconsequential once the cluster is destroyed

The reasoning behind it is that we destroy clusters every night (for cost reasons) and bring them up again the next morning.
Skipping the workload teardown saves time.
Plus, we're using AWS and I'm managing the aws-auth configmap with TF, and there are some funky sequencing problems there.

@jimsnab

jimsnab commented Aug 5, 2021

We had to abandon terraform destroy because of the lack of this feature. As mentioned, it follows a sometimes undesirable procedure of deleting inner resources before deleting the container. For example, on GKE a cluster will be left with a Google-installed component (gke-metrics-agent) and the destroy will time out before reaching the step of deleting the cluster. In this case what is desired is to delete the container, because its inner resources will all go away with it, Terraform-created or otherwise.

I'd like to mark the container as '@DestroysContainedResources'.

@GaToRAiD

We have run into a situation where this is a necessary evil as well. If there wasn't enough evidence already, I'll pile on.

Here is my situation:

  1. Need to create a "stack", per se, of a set of servers.
  2. Create AMIs of the pre-configured stack.
  3. Share the AMIs with another AWS enclave.
  4. Be able to destroy said stack of servers but keep the AMIs intact.

However, when and if I need to destroy said group of servers, I would not want to destroy the shared AMIs of this set. So this forces me to leave the servers in place so that destroying them does not also destroy the AMIs created from this TF.

It would really be nice to have a way to mark a resource as not destroyable.

@benoitvidis

One solution I am using for this situation is to split the stack into several Terraform projects.

The dependencies between the projects can be imported using remote state data sources.

In most situations it is possible to isolate the resources you want to keep. Once that is done, you can destroy every project except the one containing those resources.
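
A minimal sketch of that wiring, assuming an S3 backend and that the network project exports a private_subnet_ids output (all names are examples):

# In the project that consumes the network project's resources.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "example-terraform-state"
    key    = "network/terraform.tfstate"
    region = "eu-west-1"
  }
}

# Reference an output exported by the network project.
locals {
  private_subnet_ids = data.terraform_remote_state.network.outputs.private_subnet_ids
}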

@tegge

tegge commented Nov 10, 2021

Can this feature please be implemented? We are generating AWS accounts with Terraform, and since Terraform is not able to delete or move the accounts in the organization, I would still like to be able to successfully destroy the rest.

@digihunch

We also need this feature. Our TF template uses the Azure provider to create an AKS cluster, then the Kubernetes provider to create a service account with a namespace. The terraform destroy command gets stuck while deleting the Kubernetes modules.
I wish there was a way to just skip the modules/resources that were created with the Kubernetes provider.

@jcomish

jcomish commented Apr 12, 2022

Would love to see this get implemented for our Terraform-managed Kubernetes workloads. We currently need to use some hacky workarounds whenever we need to destroy resources.

@abower-digimarc

Would love to have a terraform option to delete at a higher level. We want to delete a cluster, and all services on it, but it gets hung up deleting the security groups, and never gets around to anything else.

@FernandoMiguel

Yet another use case.
When creating the aws-auth configmap, it needs to depend on all the other IAM role resources so that, when applying, all the roles are created beforehand.
But when destroying, due to the depends_on, it's the first thing to be destroyed, which then leaves the kubernetes and kubectl resources unable to talk to the cluster.
Being able to skip/exclude the deletion of this configmap would be great.
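
For context, a rough sketch of that shape, assuming IAM role resources (aws_iam_role.node, aws_iam_role.admin) are defined elsewhere in the same configuration:

resource "kubernetes_config_map" "aws_auth" {
  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }

  data = {
    mapRoles = yamlencode([{
      rolearn  = aws_iam_role.node.arn
      username = "system:node:{{EC2PrivateDNSName}}"
      groups   = ["system:bootstrappers", "system:nodes"]
    }])
  }

  # Guarantees the roles exist before apply, but also makes this configmap the
  # first thing destroyed, after which the remaining Kubernetes resources can no
  # longer authenticate against the cluster.
  depends_on = [aws_iam_role.node, aws_iam_role.admin]
}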

@dzmitry-lahoda

My use case: my app looks like this:

  1. Some shared, seldom-changing cloud stuff
  2. Managed k8s cluster machines
  3. Setup of in-cluster stuff

All the in-cluster stuff is destroyed in step 1 anyway, and the in-cluster stuff fails to be destroyed because of k8s issues if only step 2 is destroyed.

So I would prefer to skip the nice clean-up and go straight to destroying step 2.

@remiflament

In the meantime, a workaround: https://stackoverflow.com/questions/55265203/terraform-delete-all-resources-except-one

And especially this answer if you want to destroy everything except the resources you want to keep (state manipulation):
https://stackoverflow.com/a/74985739/8613429

@randall-coding

Intuitively, this seems like it would go in the lifecycle block. In fact I had assumed that the prevent_destroy flag would act like this, rather than producing an error when terraform destroy is called.

I would suggest that, instead of an error, it should produce a warning during the planning phase that this resource isn't being destroyed, and then proceed to destroy all the other resources. Perhaps easier said than done, but that is what makes the most sense to me.
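
For reference, the existing flag looks like the sketch below (the bucket is just an example); today it makes any destroy plan fail with an error instead of skipping the resource:

resource "aws_s3_bucket" "state" {
  bucket = "example-terraform-state-bucket"

  lifecycle {
    # Current behaviour: any plan that would destroy this resource errors out.
    # The suggestion above is to emit a warning and skip it instead.
    prevent_destroy = true
  }
}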

@sparr

sparr commented Apr 25, 2024

My use case for this feature is when tf is managing its own remote state storage location (e.g. s3 bucket). I need to destroy everything else but keep that bucket (which I might later delete).

@lucasscheepers

Any updates on this upcoming feature?
