
0.7.0 regression: "unknown variable accessed" in destroy #7378

Closed
glasser opened this issue Jun 27, 2016 · 10 comments · Fixed by #7648


glasser commented Jun 27, 2016

Terraform Version

I can reproduce this with today's HEAD (ebb9b7a) and with 0.7.0-rc2. The issue does not occur with 0.6.16: this is a 0.7.0 regression.

Affected Resource(s)

Seems like core.

Terraform Configuration Files

You can reproduce this by running git clone https://github.com/glasser/terraform-destroy-bug; the code is also inlined below:

# top.tf
module "middle" {
  source = "./middle"
  param  = "foo"
}
# middle/middle.tf
variable param {}

resource "null_resource" "n" {}

module "bottom" {
  source       = "./bottom"
  bottom_param = "${var.param}"
}
# middle/bottom/bottom.tf
variable bottom_param {}

Steps to Reproduce

  1. terraform get
  2. terraform apply
  3. terraform destroy -force

Expected Behavior

This should create and destroy the single null_resource.

Actual Behavior

The destroy -force fails with:

module.middle.null_resource.n: Refreshing state... (ID: 8515402792609950867)
null_resource.n: Destroying...
null_resource.n: Destruction complete
Error applying plan:

1 error(s) occurred:

* 1:3: unknown variable accessed: var.param in:

${var.param}

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

It does not matter which of the three modules contains the null_resource (and the issue is not specific to null_resource); it just appears that the configuration must contain at least one resource somewhere for the bug to be triggered.

Debug Output

TF_LOG=DEBUG: https://gist.github.com/glasser/9ad8b72709072ceddae2ae44111c9a94
TF_LOG=TRACE: https://gist.github.com/glasser/e830d64ec640c80b63c0614cc4123406

References

This seems similar to #7131 (fixed by @phinze and reviewed by @jen20); however, the fix there does not fix this situation. (I think we were also encountering whatever #7131 fixed in our stack, as upgrading to get that fix did reduce the number of errors of this form that we saw, but not entirely.)

@randomvariable

Looks like it's this commit 559f017

@randomvariable

I don't have any idea what this change does internally, but it seems to 'fix' this particular issue:
https://github.com/randomvariable/terraform/commit/e14f5dbe79d47c369125882127ec77498b4b8978

Not sure what right looks like though.


jbardin commented Jul 6, 2016

closed via #7496

@jbardin jbardin closed this as completed Jul 6, 2016

glasser commented Jul 7, 2016

Frustratingly, while #7496 does appear to fix the minimized reproduction I posted above, it doesn't fix the actual bug I was seeing in our real Terraform setup. (I have been trying to add full end-to-end testing with an entirely clean test environment to our system, so I have been stress-testing things like creating new environments and destroying them on these prereleases.)

Will try to minimize again.


glasser commented Jul 7, 2016

@jbardin I've pushed another commit to https://github.com/glasser/terraform-destroy-bug

Running terraform at 4c602d1 (which includes #7496), my original reproduction passes, but my newer reproduction does not pass, with a similar error:

$ terraform destroy -force
module.middle.template_file.middle: Refreshing state... (ID: 2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae)
module.middle.bottom.template_file.bottom: Refreshing state... (ID: d07fd213348652a6c1f60d3ef50bdc88eaa89d891b5a9aeede323f05669b227f)
template_file.middle: Destroying...
template_file.bottom: Destroying...
template_file.middle: Destruction complete
template_file.bottom: Destruction complete
Error applying plan:

1 error(s) occurred:

* 1:3: unknown variable accessed: var.param in:

${var.param}

Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.

The change I made to my reproduction was to replace the null_resource with a template_file that actually uses the var, and to add another template_file to the bottom module.

(I don't think template_file specifically is relevant here; my actual configuration uses datadog_monitor.)
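Based on that description, the updated middle module presumably looks something like the following. This is a hypothetical sketch, not the actual committed files (those are in the linked repo); the resource names and template contents here are assumptions:

```hcl
# middle/middle.tf (hypothetical sketch -- see the repo for the real files)
variable param {}

# A template_file that actually uses the variable, replacing null_resource.
resource "template_file" "middle" {
  template = "${var.param}"
}

module "bottom" {
  source       = "./bottom"
  bottom_param = "${var.param}"
}

# middle/bottom/bottom.tf (hypothetical sketch)
variable bottom_param {}

# The additional template_file added to the bottom module.
resource "template_file" "bottom" {
  template = "${var.bottom_param}"
}
```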

Please reopen this bug, or I can file a new one.


jbardin commented Jul 8, 2016

Hi @glasser, thanks for the update. I'll reopen this one since it appears to still be the same issue.

@jbardin jbardin reopened this Jul 8, 2016
phinze added a commit that referenced this issue Jul 15, 2016
The report in #7378 led us down a deep rabbit hole that turned out to
expose a bug in the graph walk implementation used by the
`NoopTransformer`. The problem occurred when two nodes in a single
dependency chain both reported `Noop() -> true` and needed to be
removed. This was breaking the walk and preventing the second node from
ever being visited.

Fixes #7378
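The failure mode described in that commit message can be illustrated with a small sketch. This is Python, not Terraform's actual Go `dag` package, and the function and node names are invented for illustration: if a depth-first walk reads a node's edges *after* invoking a visit callback that lets the node remove itself from the graph, the second no-op node in a chain is never visited; snapshotting the edges before the callback avoids that.

```python
def walk_buggy(graph, start, visit):
    """Reads a node's edges after visiting it, so a callback that
    removes the node from the graph also hides its children."""
    seen = set()
    def dfs(node):
        if node in seen:
            return
        seen.add(node)
        visit(node)                        # callback may mutate `graph`
        for child in graph.get(node, ()):  # edges read AFTER the callback
            dfs(child)
    dfs(start)
    return seen

def walk_fixed(graph, start, visit):
    """Snapshots the edges before visiting, so self-removal during the
    callback cannot break the rest of the walk."""
    seen = set()
    def dfs(node):
        if node in seen:
            return
        seen.add(node)
        children = list(graph.get(node, ()))  # snapshot BEFORE the callback
        visit(node)
        for child in children:
            dfs(child)
    dfs(start)
    return seen

def make_graph():
    # A single dependency chain: root -> noop1 -> noop2,
    # where both noop nodes report "remove me" when visited.
    return {"root": ["noop1"], "noop1": ["noop2"], "noop2": []}

def remove_if_noop(graph):
    def visit(node):
        if node.startswith("noop"):
            graph.pop(node, None)  # node removes itself and its edges
    return visit

g = make_graph()
buggy = walk_buggy(g, "root", remove_if_noop(g))
g = make_graph()
fixed = walk_fixed(g, "root", remove_if_noop(g))
print(sorted(buggy))  # ['noop1', 'root'] -- noop2 was never visited
print(sorted(fixed))  # ['noop1', 'noop2', 'root']
```

The fix in the referenced commits applies the same idea to `ReverseDepthFirstWalk`, so that a node removing itself no longer prevents nodes further along the chain from being visited.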
phinze added a commit that referenced this issue Jul 15, 2016
dag: fix ReverseDepthFirstWalk when nodes remove themselves

glasser commented Jul 15, 2016

Great, this fixes not only my minimized reproduction but my real use case as well!


eyalzek commented Aug 4, 2016

I'm getting this error now while trying to migrate my codebase to 0.7.
It's really hard to debug, since the output of TF_LOG=DEBUG shows entirely different errors; I have yet to manage to fix this...


phinze commented Aug 4, 2016

Hi @eyalzek - sorry to hear that! Can you file a fresh issue with a bit more detail about your environment? Then we can dig in and hopefully get it sorted for v0.7.1.


ghost commented Apr 23, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@ghost ghost locked and limited conversation to collaborators Apr 23, 2020