Terraform detects change when there is no change due to template_file #9042
Just wanted to provide additional info. I have refined my test cases and used the following to get expected results: as soon as I upgrade to TF v0.12.3, perform `init -upgrade`, and then run `plan`, I see additional projected changes for `aws_iam_role_policy` even though nothing will actually change.

Attachments:
- Main terraform file with redacted role
- Plan generated when toggling lambda var, TF v0.11.14
- Plan generated when toggling lambda var, TF v0.12.3

Terraform will indicate there is a change or potential change to `aws_iam_role_policy`. This happens every time the lambda environment variable value is toggled and apply is rerun, so it's not just an artifact of the first run with v0.12.3; it happens every time. I know from the previous comments that a terraform plan and its execution aren't guaranteed to be equivalent, but this wasn't the previous behavior, and the more noise created during the plan, the more difficult it is to evaluate a plan and determine whether it's acceptable and OK to implement/commit.
I just ran across something similar with ELB listeners (the `aws_elb` resource). I've got a dynamic `listener` block, and no matter how I feed it the list of listeners, it shows a remove and an add. If this is a no-op, then it won't matter, but I'm not sure I want to run this against an active production load balancer and find out the hard way that it causes a hiccup while re-creating the listener! What's strange is that I have other load balancers in the same state, using the same resource block, that aren't showing changes.
---
This bug kinda defies the whole notion of "plan": for an infra with only some 8 servers, everything shows as changed on every change because of those templates. Anybody have any ideas how to fix this, please? 🙏

---
I'm having the same issue with AWS delivery streams. I don't change the variables, but this happens on every apply.

Terraform version:

```
> terraform version
Terraform v0.12.28
+ provider.aws v2.48.0
```

The terraform looks like this:
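(The reporter's snippet did not survive this thread's migration. As a stand-in only, a minimal hypothetical delivery stream of the kind being described, with all names and referenced resources assumed, might look like:)

```hcl
# Hypothetical minimal example (names, role, and bucket assumed);
# not the reporter's actual configuration, which was lost.
resource "aws_kinesis_firehose_delivery_stream" "example" {
  name        = "example-stream"
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn   = aws_iam_role.firehose.arn
    bucket_arn = aws_s3_bucket.destination.arn
  }
}
```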
---
I got something similar with a template for Batch: nothing has changed, yet the environment variables get swapped around at random. Terraform 0.12.29.

Edit: Well, I found my issue.

---
@h2ppy - it's resolved in Terraform v0.14.

---
Hey @marisveide I just updated the terraform version and am still facing the issue. |
same here
tf version 0.14.2

---
@h2ppy I am experiencing a similar issue under Terraform 0.13.7 with AWS provider 3.19.0 or 3.50.0.
I will see what happens once I've upgraded Terraform to version 0.14 and later 1.0.2, as I'm in the process of doing so.

---
I opened a support ticket related to this after upgrading to TF 0.14. In the end it was determined that the `template_file` provider was out of date, unsupported, and archived, and should no longer be used, as it will not work correctly with TF 0.14+. It is a pretty simple conversion to the `templatefile()` function, however. Old style:

Converted to new style:
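(The before/after snippets were lost from this comment. A hedged sketch of the conversion, with the template file name, variables, and resource names assumed, is:)

```hcl
# Old style: the archived template_file data source.
data "template_file" "policy" {
  template = file("${path.module}/policy.json.tpl")
  vars = {
    lambda_arn = aws_lambda_function.test.arn
  }
}
# ...referenced elsewhere as data.template_file.policy.rendered

# New style: the built-in templatefile() function, called inline,
# with no data source needed.
resource "aws_iam_role_policy" "policy" {
  name = "policy"
  role = aws_iam_role.test.id
  policy = templatefile("${path.module}/policy.json.tpl", {
    lambda_arn = aws_lambda_function.test.arn
  })
}
```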
---
I'm also getting this behaviour with
---
I'm also seeing this "false positive" with Terraform version v1.0.10 (via the hashicorp/terraform:light Docker container). Also, I'm using the following providers:
The following resources are reporting "changes", even though there were no actual changes made:
---
Is there any answer for this? I am facing the same issue for an IAM role.

---
@sveerabathini Same here, did you find a way?

---
bump, also facing similar issue |
@pierresouchay I found that we should not put multiple principals together in one statement; we need separate blocks of principals. Example: my initial code block under `assume_role_policy` was as below; I changed it as below, with `Version = "2012-10-17"`. That worked for me, please check.
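(The code blocks were lost from this comment. A hypothetical reconstruction of the change, with the service names assumed, might be:)

```hcl
# Hypothetical reconstruction (service names assumed).
# Before: one statement with a list of services, which produced
# spurious diffs on every plan:
#   Principal = { Service = ["lambda.amazonaws.com", "states.amazonaws.com"] }
# After: one statement per service, which planned cleanly.
resource "aws_iam_role" "example" {
  name = "example-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect    = "Allow"
        Action    = "sts:AssumeRole"
        Principal = { Service = "lambda.amazonaws.com" }
      },
      {
        Effect    = "Allow"
        Action    = "sts:AssumeRole"
        Principal = { Service = "states.amazonaws.com" }
      },
    ]
  })
}
```

---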
@sveerabathini Thank you for your answer. That's interesting; it probably means there is a bug in the provider regarding the array of services. It sounds a bit like what I saw several times. On:

Sometimes, TF was reporting changes such as:

=> so the order is not being checked properly, and `Service` is being treated as an array instead of a set (where order does not matter).

---
I am facing the same issue with CloudFront (`aws_cloudfront_distribution`).
As depicted here: hashicorp#9042 (comment). It seems the order of services is not predictable, causing Terraform to think it has changes to perform. This change avoids that by always sorting the services into the same predictable order.
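(The commit itself is not shown here. A hypothetical sketch of pinning the order with `sort()`, with all names assumed, might look like:)

```hcl
# Hypothetical sketch (names assumed): sort the service principals so
# the rendered JSON is byte-identical on every run, keeping plans clean.
locals {
  assume_services = sort([
    "states.amazonaws.com",
    "lambda.amazonaws.com",
  ])
}

resource "aws_iam_role" "example" {
  name = "example-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = local.assume_services }
    }]
  })
}
```

---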
I am facing the same issue. I'm pulling the tf file my teammates are working on, and even though I didn't make any changes, when I run `terraform plan` it shows lines to be changed.
---
I am also facing this issue with a Kinesis delivery stream, as others have noted:

This always shows as needing to change, and removing it throws an error. Terraform v1.4.3-dev
---
It could be related to this: https://discuss.hashicorp.com/t/trailing-new-line-in-key-vault-after-using-heredoc-syntax/14561
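(For context: that thread describes heredoc values, which always end in a trailing newline; when the remote API strips it, every plan shows a diff. A hedged sketch of the usual fix, with the resource and names assumed, is:)

```hcl
# Hypothetical example (resource and names assumed): trimspace()
# removes the trailing newline a heredoc always produces, so the
# stored value matches what the API returns and the plan stays clean.
resource "aws_ssm_parameter" "example" {
  name  = "/example/value"
  type  = "String"
  value = trimspace(<<-EOT
    hello world
  EOT
  )
}
```

---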
I was having a similar issue with my Elastic Beanstalk config. I kept getting changes like:
Upon carefully looking at the diff, I noticed that one setting appeared in the "added" part, but not in the "removed" part:
I ran

```
aws --region=eu-central-1 elasticbeanstalk describe-configuration-settings --application-name my-app --environment-name my-env
```

…and this setting was nowhere to be found. I had manually updated the health check path, and there were only these two settings related to the health check path:

```yaml
- Namespace: aws:elasticbeanstalk:application
  OptionName: Application Healthcheck URL
  Value: /testpath
- Namespace: aws:elb:healthcheck
  OptionName: Target
  ResourceName: AWSEBLoadBalancer
  Value: HTTP:80/testpath
```

…so I changed my Terraform config to not use this:

```hcl
setting {
  namespace = "aws:elasticbeanstalk:environment:process:default"
  name      = "HealthCheckPath"
  value     = "/health"
}
```

…but to use this instead:

```hcl
setting {
  namespace = "aws:elasticbeanstalk:application"
  name      = "Application Healthcheck URL"
  value     = "/health"
}
```

After I ran …

Edit: One peculiar thing, however, is that whenever I have a new change anywhere in the … In my case at least, it looks like an issue with the …

---
Same issue here with azurerm and tags. Plan output:

Any workaround to solve this behavior?

---
@alefred - I believe this is happening due to your tag keys replacing spaces with underscores.

---
Hi everyone 👋 Thank you all for taking the time to participate in the conversation here. It appears things have drifted a bit, and this issue has become a collection point for issues with somewhat similar symptoms. Unfortunately, this is difficult for us to address in any meaningful way, since the root cause may be different across the different resources mentioned.

As far as the original report goes, the … With those things in mind, I'm going to close this issue. If you have any lingering problems that you would like to report, please open a new issue so that we can triage it appropriately.

---
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. |
This issue was originally opened by @thtran101 as hashicorp/terraform#21789. It was migrated here as a result of the provider split. The original body of the issue is below.
I use Terraform to manage a serverless architecture on AWS, and after migrating to Terraform v0.12.2 from v0.11.x, I've noticed that there are "false positive" diffs detected when running plan/apply, but the false positive change is not actually applied when the plan is approved. This problem revolves around the use of template file resources. It seems there is a difference in how (or when?) template files are rendered and evaluated against current state.
The following are my TF specs.
Terraform v0.12.2
I've put together as concise an example for reproducing the behavior as possible. In my example below the template file is used for a resource policy, but I have this same problem occurring on state machine definitions that use template files.
In the above configuration file there is:
When the infrastructure has been deployed and is in a steady state with no diffs detected, deploy an update to the lambda by toggling the a_lambda_var to another value like "y".
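(The configuration file itself was lost when this issue was migrated. A hypothetical reconstruction of the shape being described, with all names, runtimes, and file names assumed, might be:)

```hcl
variable "a_lambda_var" {
  default = "x" # toggle to "y" to trigger the spurious policy diff
}

# Renders the redacted policy attached below as test_policy.txt.
data "template_file" "test_policy" {
  template = file("${path.module}/test_policy.txt")
  vars = {
    lambda_arn = aws_lambda_function.test.arn
  }
}

resource "aws_iam_role" "test_role" {
  name = "test_role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = "lambda.amazonaws.com" }
    }]
  })
}

resource "aws_lambda_function" "test" {
  function_name = "test"
  filename      = "lambda.zip"
  handler       = "index.handler"
  runtime       = "python3.7"
  role          = aws_iam_role.test_role.arn

  environment {
    variables = {
      A_LAMBDA_VAR = var.a_lambda_var
    }
  }
}

# The resource that shows a spurious diff in the plan whenever
# a_lambda_var is toggled.
resource "aws_iam_role_policy" "test_role_policy" {
  name   = "test_role_policy"
  role   = aws_iam_role.test_role.id
  policy = data.template_file.test_policy.rendered
}
```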
Expected Behavior:
Only 1 change is detected with terraform apply/plan for the lambda function.
Actual Behavior:
2 changes are detected/predicted in the following order:
a) aws_iam_role_policy.test_role_policy will change with its single statement being dropped
b) lambda function changes due to variable value change
Actual Approved Plan Behavior:
Only 1 modification is made to the lambda function which contradicts the plan.
I didn't experience this problem in Terraform v0.11.x or earlier versions; I've used my config for over 6 months with countless deployments. This bug may be related to open issue #21545.
test_policy.txt
Let me know if you need me to attach a test lambda package, but absolutely any package will allow you to reproduce the problem.