
terraform destroy tries to evaluate outputs that can refer to non existing resources #18026

Closed
mildred opened this issue May 11, 2018 · 23 comments · Fixed by #24083
Labels
bug · core · v0.11: Issues (primarily bugs) reported against v0.11 releases · v0.12: Issues (primarily bugs) reported against v0.12 releases

Comments

@mildred
Contributor

mildred commented May 11, 2018

Terraform Version

0.11.7

Terraform Configuration Files

during apply:

resource "null_resource" "a" {
}

output "foo" {
  value = "${list(null_resource.a.id)}"
}

during destroy:

resource "null_resource" "a" {
}

resource "null_resource" "b" {
}

output "foo" {
  value = "${list(null_resource.a.id, null_resource.b.id)}"
}

Debug Output

https://gist.github.com/mildred/af89828e68b53f996e3132f1eed26229

Output

$ terraform-0.11.7 destroy
null_resource.a: Refreshing state... (ID: 4128732092189392605)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - null_resource.a


Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes


Error: Error applying plan:

1 error(s) occurred:

* output.foo: Resource 'null_resource.b' does not have attribute 'id' for variable 'null_resource.b.id'

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

Expected Behavior

No error should have happened. Terraform should detect that null_resource.b was never created and shouldn't try to access it. In any case, it doesn't need to generate outputs on destroy.

Actual Behavior

Terraform tries to evaluate the output that refers to a non-existent resource and fails on it.

Steps to Reproduce

  • Write to example.tf the first version of the file
  • terraform init
  • terraform apply
  • Write to example.tf the second version of the file
  • terraform destroy (a shell sketch of these steps follows below)
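
A shell sketch of these steps, assuming the two versions of example.tf shown above are saved as example_v1.tf.bak and example_v2.tf.bak (hypothetical filenames):

$ cp example_v1.tf.bak example.tf   # first version: only null_resource.a in output "foo"
$ terraform init
$ terraform apply
$ cp example_v2.tf.bak example.tf   # second version: adds null_resource.b to output "foo"
$ terraform destroy                 # fails while evaluating output "foo"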

Additional Context

This is an error that initially happened with an AWS resource but could be reproduced with null_resource.

@jbardin
Member

jbardin commented May 11, 2018

Hi @mildred,

Thanks for filing the issue with a great example!

Outputs are still evaluated during destroy, because they can feed into other modules that may still depend on them. In this case it's obvious that it can be pruned, but there are some cases that are a little harder to detect at the moment.
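
A minimal sketch of the kind of cross-module dependency meant here (module, output, and resource names are hypothetical):

module "network" {
  source = "./network"
}

# The child module's output is consumed here, so Terraform still needs to be
# able to resolve it while building the destroy graph.
resource "aws_instance" "app" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
  subnet_id     = "${module.network.subnet_id}"
}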

There are definitely a few more changes I have planned around outputs (and locals) which will take care of this, but they have to wait until after the next major release.

Thanks!

@oliviabarrick

Is there any way to work around this sort of issue?

@mildred
Contributor Author

mildred commented Aug 6, 2018

The workaround is to always have a successful apply before each destroy, or to revert the Terraform code before destroying.
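
In CLI terms, a sketch of the first workaround (re-apply so the state matches the configuration on disk, then destroy):

$ terraform apply     # bring the state back in line with the current configuration
$ terraform destroy   # outputs now only reference resources that exist in state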

@Moeser

Moeser commented Aug 29, 2018

Just adding an additional data point here. I'm currently running into this issue with 0.11.8 and the azurerm provider, where a module output is interpolated with formatlist(). Other than those details, it looks like the same bug.
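
For context, a hypothetical 0.11-style shape of such an output (resource, attribute, and format string are assumptions, not the actual configuration):

output "vm_fqdns" {
  value = "${formatlist("%s.example.com", azurerm_virtual_machine.vm.*.name)}"
}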

@elvis2

elvis2 commented Aug 30, 2018

@jbardin Would adding a "terraform refresh" before a validate or before apply solve the issue?

@jbardin
Member

jbardin commented Aug 30, 2018

@elvis2, unfortunately not, because the problem is that the output gets evaluated, not that the state isn't up to date. The output node needs to be selectively pruned from the graph during destroy.

@chiefy

chiefy commented Sep 4, 2018

We run destroy before apply in a nightly CI/CD job to assert that there are no leftover resources that weren't properly cleaned up. This worked fine with 0.10.x; we recently upgraded to 0.11.8 and started having this issue with our nightly builds. destroy after apply should work, but doesn't always in our case due to proxy and AWS auth issues.

Edit: also seeing this in the following scenario:
Perform a terraform destroy; midway through, the process errors out due to a proxy auth issue. Push errored.tfstate to the S3 state backend. Re-run terraform destroy and it won't run due to missing attributes.
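
For reference, pushing a locally saved errored state back to the configured remote backend looks roughly like this:

$ terraform state push errored.tfstate   # upload the errored local state to the remote backend
$ terraform destroy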

@willtrking

Ran into the same issue here. In our case we're using Terraform to stand up an EKS Kubernetes cluster. One of the ELBs managed by EKS wasn't destroyed properly by Kubernetes, and the terraform destroy failed as expected. When the ELB was deleted and destroy was re-run, we hit this issue.

@john-tipper

I get a similar error with v0.11.8:

data "template_file" "ansible_inventory" {
  template = "${ ...interpolated string that uses values from the ec2 resources in the next line (e.g. ip addresses etc)... }"
  depends_on = [ ...list of ec2 resources... ]
}

output "ansible_inventory" {
  value = "${data.template_file.ansible_inventory.rendered}"
}

When I do a destroy, I get an error:

Resource 'data.template_file.ansible_inventory' does not have attribute 'rendered' for variable 'data.template_file.ansible_inventory.rendered'

@pgporada

pgporada commented Nov 27, 2018

I receive a similar error with Terraform v0.11.10.

data.terraform_remote_state.vpc: Refreshing state...
data.aws_ami.k8s-node: Refreshing state...
data.aws_availability_zones.available: Refreshing state...
aws_vpc.myvpc: Refreshing state... (ID: vpc-0c983f77fafa82b1f)
aws_security_group.rds: Refreshing state... (ID: sg-02fa19e31b73fea3c)

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  - module.database.aws_security_group.rds

  - module.vpc.aws_vpc.myvpc


Plan: 0 to add, 0 to change, 2 to destroy.

Do you really want to destroy all resources in workspace "myvpc-us-west-1"?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

module.database.aws_security_group.rds: Destroying... (ID: sg-02fa19e31b73fea3c)
module.vpc.aws_vpc.myvpc: Destroying... (ID: vpc-0c983f77fafa82b1f)
module.database.aws_security_group.rds: Destruction complete after 1s
module.vpc.aws_vpc.ct: Destruction complete after 1s
Releasing state lock. This may take a few moments...

Error: Error applying plan:

5 error(s) occurred:

* module.database.output.instance_id: Resource 'aws_db_instance.rds' does not have attribute 'id' for variable 'aws_db_instance.rds.id'
* module.database.output.db_pass: Resource 'random_string.password' does not have attribute 'result' for variable 'random_string.password.result'
* module.database.output.arn: Resource 'aws_db_instance.rds' does not have attribute 'arn' for variable 'aws_db_instance.rds.arn'
* module.eks.local.kubeconfig: local.kubeconfig: Resource 'aws_eks_cluster.mycluster' does not have attribute 'endpoint' for variable 'aws_eks_cluster.mycluster.endpoint'
* module.database.output.db_user: Resource 'aws_db_instance.rds' does not have attribute 'username' for variable 'aws_db_instance.rds.username'

./module/database/

output "db_user" {
  value = "${aws_db_instance.rds.username}"
}

output "arn" {
  value = "${aws_db_instance.rds.arn}"
}

output "instance_id" {
  value = "${aws_db_instance.rds.id}"
}

output "db_pass" {
  value = "${random_string.password.result}"
}

./module/eks/

locals {
  kubeconfig = <<KUBECONFIG

apiVersion: v1
clusters:
- cluster:
    server: ${aws_eks_cluster.mycluster.endpoint}
    certificate-authority-data: ${aws_eks_cluster.mycluster.certificate_authority.0.data}
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "${var.name}-${var.region}"
      env:
        - name: AWS_PROFILE
          value: "default"
KUBECONFIG
}

@BruceFletcher

#17655 seems to offer a workaround for this, in the form of setting TF_WARN_OUTPUT_ERRORS=1 in your environment before running terraform destroy.
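
In shell form (0.11.x only):

$ export TF_WARN_OUTPUT_ERRORS=1
$ terraform destroy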

In that issue there's a link to https://github.com/hashicorp/terraform/blob/master/CHANGELOG.md#0111-november-30-2017, which suggests that this workaround will cease to work in 0.12. Terraform looks to be playing whack-a-mole with this 'Resource X does not have attribute Y for variable' failure mode, so I'm not encouraged to see that they plan to remove this flag.

All credit to github user hawksight for identifying the workaround and the relevant changelog entry.

@chiefy

chiefy commented Dec 19, 2018

@BruceFletcher thank you! I've been trying to work around this for a while now; thanks for cross-posting the TF_WARN_OUTPUT_ERRORS=1 hack.

@JoshuaC215

I've been racking my brain for other workarounds; in our case we are using outputs to get underlying EKS attributes to configure Kubernetes.

If the output you need to access is for scripting and is a rendered template or another existing attribute, it looks like you can pipe the expression into terraform console to accomplish the same thing and get rid of the top-level output that causes the errors.
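
A hypothetical sketch of that approach, using the rendered template from an earlier comment as the expression (the exact address is an assumption):

$ echo 'data.template_file.ansible_inventory.rendered' | terraform console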

Unfortunately we use an alternate plugin dir so I'm still stuck, but hoping this might help someone else.

@davidhesson

@mildred I just want to mention that I had a completely successful apply prior to running destroy, and still had this problem. So this is not a foolproof workaround.

@DonBower

export TF_WARN_OUTPUT_ERRORS=1 works for me for now.
Would love to see this persist.

@xpaulz

xpaulz commented Jul 30, 2019

In my case (v0.11.10), the first terraform destroy began fine but ultimately failed because some resources had been managed outside of Terraform, which prevented Terraform from successfully destroying everything.

After deleting/destroying those external resources manually, however, subsequent terraform destroy invocations no longer worked, apparently because Terraform had never removed the module outputs that referenced the deleted resources?

I was able to terraform destroy -target=... for each of the remaining resources, but the state file still contains some outputs and data source resources.
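
For reference, a targeted destroy of that kind looks roughly like this (the resource address is hypothetical):

$ terraform destroy -target=module.mymodule.aws_instance.example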

Not sure how I would clean it up if I wanted to re-use the same state (maybe terraform state rm module.x.outputs.y for each remaining output y in each defined module x), but in my case I'm shutting down the project, so I really only cared about terminating the concrete resources. :shrug:

@hashibot added the "v0.11: Issues (primarily bugs) reported against v0.11 releases" label on Aug 29, 2019
@TomMann

TomMann commented Sep 5, 2019

I'm still getting this with 0.12.3; not sure why it's been labeled as a 0.11 issue and closed.
Outputs are still resolved during destroy, and when they reference things such as arrays that no longer exist it errors with:

"somevar is empty tuple

The given key does not identify an element in this collection value."
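
A hypothetical 0.12 sketch of the pattern that produces this error once the counted resource is gone (names are assumptions):

output "somevar" {
  # When aws_instance.example is created with count and its instances have
  # already been destroyed, this is an empty tuple and the [0] lookup fails.
  value = aws_instance.example[0].id
}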

@TWCurry

TWCurry commented Sep 12, 2019

This is still an issue with 0.12.8. I get "The given key does not identify an element in this collection value." This references a map output from a module: when the resources created by the module are destroyed manually, both recreating (using terraform apply) and destroying (using terraform destroy) fail while trying to resolve the output. Since I'm using 0.12.x, export TF_WARN_OUTPUT_ERRORS=1 doesn't work.
Is there any update on when this issue will be fixed, or whether there is any sort of workaround for 0.12.x?

@hashibot added the "v0.12: Issues (primarily bugs) reported against v0.12 releases" label on Nov 20, 2019
@Kevin-Molina

Also interested in a temporary workaround or update on a fix. We just upgraded to 0.12 as well.

@omry-hay

omry-hay commented Dec 16, 2019

Any updates on this issue?
We are facing the same issue on v0.12.13.

@VincentHokie

For v0.12.x I had to use conditionals on the outputs, like the following.

For resources created with count (tuples/lists):

output "elasticsearchSecurityGroup" {
  value = length(aws_security_group.elasticsearchSecurityGroup) > 0 ? aws_security_group.elasticsearchSecurityGroup[0].arn : ""
}

For resources created with for_each (maps):

output "logsBucketName" {
  value = lookup(aws_s3_bucket.s3Buckets, "logsBucket", false) != false ? aws_s3_bucket.s3Buckets["logsBucket"].id : ""
}

and my destroy finally ran to completion.

@Arkehlor

Arkehlor commented Feb 7, 2020

The workaround seems to be to always output a fallback value, even null.

In my case I'm trying to create either a Linux or a Windows VM, and those have different configurations, so using a "mode" variable and the count attribute I create the correct resource. The problem was then with the outputs, and here is my workaround:

output "self_link" {
  description = "Self-link of the google compute instance template"
  value       = {
    "windows" = google_compute_instance_template.windows != [] ? google_compute_instance_template.windows[0].self_link : null
    "linux"   = google_compute_instance_template.linux != [] ? google_compute_instance_template.linux[0].self_link : null
  }
}

The map contains only one element (with either windows or linux as the key) when applying and none when destroying, but there are no errors in either case. (Still, be careful if these outputs are used elsewhere, as consumers now need to be able to handle a null value.)

Also, the "try" function should work for newer Terraform versions.
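
For example, a sketch of the same output using try(), which returns the fallback value when the indexed resource no longer exists:

output "self_link" {
  description = "Self-link of the google compute instance template"
  value = {
    "windows" = try(google_compute_instance_template.windows[0].self_link, null)
    "linux"   = try(google_compute_instance_template.linux[0].self_link, null)
  }
}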

@ghost

ghost commented Apr 1, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost locked and limited conversation to collaborators on Apr 1, 2020