
Terraform crashes with deposed resources #18643

teu opened this issue Aug 9, 2018 · 6 comments



teu commented Aug 9, 2018

In essence, if I understand it correctly, Terraform complains about not being able to remove a non-existent resource that is a dependency of another non-existent resource.

Terraform Version


Plan Output


An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  ~ update in-place
  - destroy
 <= read (data resources)

Terraform will perform the following actions:

      id:                  <computed>
      output_base64sha256: <computed>
      output_md5:          <computed>
      output_path:         "/terraform/states/platform/.terraform/modules/9cc629faae97b19da00144a87126b075/.terraform/archive_files/"
      output_sha:          <computed>
      output_size:         <computed>
      source.#:            <computed>
      source_dir:          "/terraform/states/platform/.terraform/modules/9cc629faae97b19da00144a87126b075/build/out"
      type:                "zip"

  ~ module.notify_slack.aws_lambda_function.lambda
      last_modified:       "2018-08-09T11:50:47.239+0000" => <computed>
      source_code_hash:    "X68jLZpyrn/OOO/NtBO5aTnB0XJZS9a5246lWk2b/3k=" => "MmuuXhYtpIehnhUJk6G34EeFiEZjdftMymOVawgjZUU="

  - module.influxdb.module.ecs_cluster.aws_iam_instance_profile.ecs (deposed)

  - module.influxdb.module.ecs_cluster.aws_iam_role.ecs (deposed)

  - module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs (deposed)

  - module.influxdb.module.ecs_cluster.aws_security_group.ecs (deposed)

Plan: 0 to add, 1 to change, 4 to destroy.


+ terraform apply -parallelism=5 platform-prod-eu-central-1.plan
Releasing state lock. This may take a few moments... Refreshing state...
module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs.deposed: Destroying... (ID: lech-ecs20180208115759339400000003)
module.notify_slack.aws_lambda_function.lambda: Modifying... (ID: us-east-1-dash-slack-lambda)
  last_modified:    "2018-08-09T11:50:47.239+0000" => "<computed>"
  source_code_hash: "X68jLZpyrn/OOO/NtBO5aTnB0XJZS9a5246lWk2b/3k=" => "MmuuXhYtpIehnhUJk6G34EeFiEZjdftMymOVawgjZUU="
module.notify_slack.aws_lambda_function.lambda: Still modifying... (ID: us-east-1-dash-slack-lambda, 10s elapsed)
module.notify_slack.aws_lambda_function.lambda: Still modifying... (ID: us-east-1-dash-slack-lambda, 20s elapsed)
module.notify_slack.aws_lambda_function.lambda: Modifications complete after 24s (ID: us-east-1-dash-slack-lambda)
Releasing state lock. This may take a few moments...

Error: Error applying plan:

1 error(s) occurred:

* module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs (destroy): 1 error(s) occurred:

* aws_launch_configuration.ecs (deposed #0): 1 error(s) occurred:

* aws_launch_configuration.ecs (deposed #0): ValidationError: Launch configuration name not found - Launch configuration lech-ecs20180208115759339400000003 not found
	status code: 400, request id: 403631ff-9bcb-11e8-b540-49f5cb28067c

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Expected Behavior

The deposed resources should be removed.

Actual Behavior

Terraform attempts to destroy the deposed launch configuration, the AWS API returns the ValidationError shown above because the object no longer exists, and the apply fails.
Steps to Reproduce

This is tricky, as it happened after a failed terraform apply, so I am unsure how to reproduce this state.

What would be useful is to know how Terraform calculates the state file's checksum. With the ability to update the checksum, removing those deposed resources by hand would be straightforward.
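For anyone attempting the state surgery described above, one possible approach (a sketch, not an official procedure) is to pull the state locally, empty each resource's deposed list, and push the edited copy back; terraform state pull / terraform state push take care of the serial and checksum bookkeeping that editing the backend directly would break. This assumes the Terraform 0.11 JSON state format, where resources under modules[].resources carry a "deposed" array. Back up the state first.

```shell
# Sketch only: assumes Terraform 0.11's state format, where each resource
# under .modules[].resources has a "deposed" array. Always keep a backup.
terraform state pull > state.json
cp state.json state.json.backup

# Empty every "deposed" list so Terraform no longer plans to destroy
# remote objects that are already gone.
jq '(.modules[].resources[] | select(has("deposed")) | .deposed) = []' \
  state.json > state-clean.json

# Push the edited copy back; if the push is rejected for a stale serial,
# bump the top-level "serial" field in state-clean.json and try again.
terraform state push state-clean.json
```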



apparentlymart commented Aug 11, 2018

Hi @teu! Thanks for reporting this, and sorry for this unfortunate behavior.

I think the issue here is that Terraform expects that the only reasonable operation to do for a deposed instance is to destroy it. I think what it should ideally do is first refresh that instance during plan -- just as it would do for a non-deposed resource -- and then Terraform would get an opportunity to notice that the remote object is already deleted and not plan to delete it.

In the meantime, I think the only way to avoid this is to let Terraform be the one to delete the deposed object, rather than some other system. If the object still exists at the point Terraform tries to delete it, then the destroy should complete successfully.



teu commented Aug 11, 2018

Hi @apparentlymart. The problem is, we have this in our production state and we are not quite sure how to get rid of it without removing infrastructure. I tried removing it with terraform state rm and also tried terraform refresh; neither worked.

Any idea how to fix the state?



apparentlymart commented Aug 13, 2018

Hi @teu!

I'm sorry I don't have a great answer here, but I do have an idea for a possible workaround:

  1. Make a note of the name of your current launch configuration (the one that isn't deposed).
  2. Run terraform state rm module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs to make Terraform "forget" all of the remote objects associated with that resource.
  3. Run terraform import module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs NAME where NAME is the current launch configuration name saved earlier, to re-import the current one back into the state again.

I will be the first to admit that this is a bothersome workaround, because it creates a temporary state in which Terraform doesn't know about the remote resource at all, so it will take some care to ensure that another Terraform run doesn't try to create a fresh one in the meantime.

A variant of this is possible if your environment can tolerate there temporarily being another duplicate launch configuration: do steps 1 and 2 from above and then just run terraform apply to have Terraform create a new object for module.influxdb.module.ecs_cluster.aws_launch_configuration.ecs. Once that's succeeded, delete the old one (now forgotten by Terraform) manually from the AWS console.



teu commented Aug 14, 2018

Hi @apparentlymart,

This actually solved my problem. We are very grateful!


teu closed this Aug 14, 2018



vlad2 commented Dec 5, 2018


I also have this problem.
In my opinion, this issue shouldn't have been closed, as it wasn't solved - only a workaround was provided.
Please re-open the issue.



commented Feb 2, 2019

@vlad2 the workaround works for me here:

$ terraform plan
Plan: 20 to add, 0 to change, 1 to destroy.


$ terraform state rm module.eks.aws_launch_configuration.eks
1 items removed.
Item removal successful.

$ terraform plan
Plan: 21 to add, 0 to change, 0 to destroy.