aws_volume_attachment workflow with skip_destroy #1017
Firstly, I feel that aws_volume_attachment should default to skip_destroy = true. aws_volume_attachment resources are notoriously tricky in Terraform, because they often block the destruction of the resources they reference.
For example, apply an aws_instance, an aws_ebs_volume, and an aws_volume_attachment that connects them (without skip_destroy), and then try to plan -destroy all three (and apply the plan).
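A minimal configuration that reproduces this looks roughly like the following (the AMI ID, availability zone, and size are placeholders):

```hcl
resource "aws_instance" "example" {
  ami               = "ami-0123456789abcdef0" # placeholder AMI
  instance_type     = "t2.micro"
  availability_zone = "us-east-1a"
}

resource "aws_ebs_volume" "example" {
  availability_zone = "us-east-1a"
  size              = 10
}

# No skip_destroy here, so a destroy will try to detach the volume first.
resource "aws_volume_attachment" "example" {
  device_name = "/dev/xvdf"
  volume_id   = "${aws_ebs_volume.example.id}"
  instance_id = "${aws_instance.example.id}"
}
```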
Terraform will simply hang trying to detach the volume until it fails with a timeout:
Worse still, the skip_destroy flag must be successfully applied to the resource in the state file before you have any hope of Terraform destroying it. If skip_destroy = "true" is merely set on the aws_volume_attachment block in the .tf file and you then try to destroy the resource, you still get the timeout error above. This means the docs are technically wrong.
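In other words, the only workflow that seems to work is: set the flag, apply it so it lands in the state, and only then destroy. Roughly:

```hcl
resource "aws_volume_attachment" "example" {
  device_name  = "/dev/xvdf"
  volume_id    = "${aws_ebs_volume.example.id}"
  instance_id  = "${aws_instance.example.id}"

  # Must be applied (terraform apply) and recorded in the state
  # *before* terraform destroy is run, or the detach still times out.
  skip_destroy = true
}
```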
Sometimes you destroy resources simply by deleting them from the .tf configuration and running apply, rather than with an explicit terraform destroy.
One of three things should happen (in my view):
Unfortunately, there is no lifecycle setting to swap the destruction order of dependencies, which rules out (3).
Any comments appreciated; it's a bit of a thorny workflow at the moment.
I can't confirm this behaviour with terraform 0.10.8 and terraform-aws 1.1.0.
I'm currently trying to figure out how to deal with this. I think the problem with your plan, and the problem that the OP missed in their workflow, is that the timeout commonly occurs when you try to destroy an attachment while the volume is still mounted. If you unmount the volume first and then run terraform destroy, the detach completes.
I just started with Terraform and I "solved" the problem like so:
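Roughly, the idea is a destroy-time remote-exec provisioner on the attachment that unmounts the device before Terraform tries to detach it (the device path, user, and connection details below are illustrative):

```hcl
resource "aws_volume_attachment" "example" {
  device_name = "/dev/xvdf"
  volume_id   = "${aws_ebs_volume.example.id}"
  instance_id = "${aws_instance.example.id}"

  # Unmount the filesystem before the attachment is destroyed,
  # so the detach call doesn't hang until it times out.
  provisioner "remote-exec" {
    when   = "destroy"
    inline = ["sudo umount -v /dev/xvdf"]

    connection {
      type = "ssh"
      user = "ubuntu"
      host = "${aws_instance.example.public_ip}"
    }
  }
}
```

Note that the connection block referencing the instance is exactly the kind of cross-resource reference that triggers the cycle described below.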
The problem I'm having, however, is that I think I'm running into hashicorp/terraform#16237, where the destroy provisioner causes a cycle.
I can solve the cycle by either (1) hard-coding the instance's IP, or (2) adding in a
I thought there might be prior work on this, but it seems everyone is using
Also running into this issue. In our case, we're trying to use the remote provisioner to stop services and unmount an EBS volume "cleanly"; this failed due to cycle issues, exactly as @nemosupremo stated.
Our workaround is to do a dirty detachment using
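One way to force such a dirty detach (shown here as an illustration, not necessarily the exact approach used) is the attachment's force_detach argument, which detaches the volume even if it is still mounted, at the risk of data loss:

```hcl
resource "aws_volume_attachment" "example" {
  device_name = "/dev/xvdf"
  volume_id   = "${aws_ebs_volume.example.id}"
  instance_id = "${aws_instance.example.id}"

  # Forces the detach on destroy even if the device is still mounted.
  # Risky for filesystems that are in active use.
  force_detach = true
}
```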