
Terraform detects change when there is no change due to template_file #9042

Closed
ghost opened this issue Jun 19, 2019 · 27 comments
Labels
bug Addresses a defect in current functionality. service/iam Issues and PRs that pertain to the iam service. service/lambda Issues and PRs that pertain to the lambda service.

Comments

@ghost

ghost commented Jun 19, 2019

This issue was originally opened by @thtran101 as hashicorp/terraform#21789. It was migrated here as a result of the provider split. The original body of the issue is below.


I use Terraform to manage a serverless architecture on AWS, and after migrating to Terraform v0.12.2 from v0.11.x, I've noticed "false positive" diffs detected when running plan/apply, but the false-positive change is not actually applied when the plan is approved. This problem revolves around the use of template_file resources. It seems there is a difference in how (or when?) template files are rendered and evaluated against current state.

The following are my TF specs.

Terraform v0.12.2

  • provider.aws v2.15.0
  • provider.null v2.1.2
  • provider.template v2.1.2

I've put together as concise an example for reproducing the behavior as possible. In my example below the template file is used for a resource policy, but I have the same problem with Step Function state machine definitions that use template files.

resource "aws_lambda_function" "test" {
  function_name = "test-delete-me"

  filename = "code-deployments/test.zip"
  handler  = "index.handler"
  runtime  = "nodejs10.x"

  // use any existing IAM role compatible w/ lambda to reproduce error
  role = aws_iam_role.lambda_basic_execution.arn

  publish = false
  timeout = 5

  environment {

    variables = {
      a_lambda_var = "x"
    }

  }

}

resource "aws_iam_role" "test_role" {
  name = "test-delete-me-role"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF

}

data "template_file" "test_policy" {
  /*
    Use any policy file, doesn't need to actually consume the variable below
  */
  template = file("policies/test_policy.tpl")

  vars = {
    my_var = aws_lambda_function.test.arn
  }
}

resource "aws_iam_role_policy" "test_role_policy" {
  name = "test-policy"
  role = aws_iam_role.test_role.id

  policy = data.template_file.test_policy.rendered

}

In the above configuration file there is:

  • a lambda function which should use any existing IAM role. The function itself doesn't matter nor does the role.
  • a test IAM role
  • a template file for an IAM policy that is defined with a variable (attached below for convenience, actual content doesn't matter)
  • an inline policy to be attached to the test IAM role

When the infrastructure has been deployed and is in a steady state with no diffs detected, deploy an update to the lambda by toggling the a_lambda_var to another value like "y".

Expected Behavior:
Only 1 change is detected with terraform apply/plan for the lambda function.

Actual Behavior:
2 changes are detected/predicted in the following order:
a) aws_iam_role_policy.test_role_policy will change with its single statement being dropped
b) lambda function changes due to variable value change

Actual Approved Plan Behavior:
Only 1 modification is made to the lambda function which contradicts the plan.

I didn't experience this problem in Terraform v0.11.x or earlier versions, and I've used my config for over 6 months with countless deployments. This bug may be related to open issue #21545.

test_policy.txt

Let me know if you need me to attach a test lambda package, but absolutely any package will allow you to reproduce the problem.

@github-actions github-actions bot added the needs-triage Waiting for first response or review from a maintainer. label Jun 19, 2019
@aeschright aeschright added service/lambda Issues and PRs that pertain to the lambda service. service/iam Issues and PRs that pertain to the iam service. labels Jul 3, 2019
@thtran101

Just wanted to provide additional info.

I have refined my test cases and used the following versions to get the expected results:

  • TF v0.11.14
  • AWS Provider v2.17.0
  • Template v2.1.2

These are the latest versions of the providers with the latest non-0.12.x version of TF.

As soon as I upgrade to TF v0.12.3, perform init -upgrade and then run plan, I will see additional projected changes for aws_iam_role_policy even though nothing will actually change.

Main terraform file with redacted role

Plan generated when toggling lambda var TFv0.11.14

Plan generated when toggling lambda var TFv0.12.3

Terraform will indicate there is a change or potential change to aws_iam_role_policy. This happens every time the lambda environment variable value is toggled and apply is rerun, so it's not happening just because of the first run with v0.12.3. It happens every time.

I know from the previous comments that terraform plan and execution aren't guaranteed to be equivalent, but this wasn't the previous behavior, and the more noise a plan contains, the harder it is to evaluate the plan and determine whether it's acceptable to implement/commit.

@ryndaniels ryndaniels added bug Addresses a defect in current functionality. and removed needs-triage Waiting for first response or review from a maintainer. labels Sep 10, 2019
@eWilliams35

I just ran across something similar with elb listeners. (aws_elb resource). I've got a dynamic listener block, and no matter how I feed the list of listeners, it's showing a remove and an add. If this is a no-op, then it won't matter, but I'm not sure I want to run this against an active production load balancer and find out the hard way that it causes a hiccup while it's re-creating the listener!

What's strange is I have other load balancers in the same state, using same resource block, that aren't showing changes.

      - listener {
          - instance_port     = 25043 -> null
          - instance_protocol = "http" -> null
          - lb_port           = 25043 -> null
          - lb_protocol       = "http" -> null
        }
      + listener {
          + instance_port     = 25043
          + instance_protocol = "http"
          + lb_port           = 25043
          + lb_protocol       = "http"
        }
@marisveide

This bug kinda defies the whole notion of "plan": for an infra of only some 8 servers, on every change everything shows as changed because of those -> null line endings.
It's totally impossible to spot what's actually being changed.

Anybody have any ideas how to fix this, please? 🙏

@bozerkins

bozerkins commented Aug 21, 2020

I'm having the same issue with AWS delivery streams. I don't change the variables, but this happens on every apply.

Terraform version:

> terraform version
Terraform v0.12.28
+ provider.aws v2.48.0
processors {
   type = "Lambda"

   parameters {
       parameter_name  = "LambdaArn"
       parameter_value = "..."
   }
   + parameters {
       + parameter_name  = "BufferSizeInMBs"
       + parameter_value = "3"
   }
   + parameters {
       + parameter_name  = "BufferIntervalInSeconds"
       + parameter_value = "60"
   }
}

The Terraform config looks like this:

parameters {
    parameter_name  = "BufferSizeInMBs"
    parameter_value = var.buffer-size
}
parameters {
    parameter_name  = "BufferIntervalInSeconds"
    parameter_value = var.interval-seconds
}

@mgamsjager

mgamsjager commented Sep 10, 2020

I got something similar with a template for AWS Batch. Nothing has changed, yet the environment variables get swapped around at random:

 {
                      ~ name  = "AWS_SIGNATURE_VERSION" -> "PGDATABASE"
                      ~ value = "v4" -> "des"
                    },

 {
                      ~ name  = "AWS_REGION" -> "LIQUIBASE_CONTEXT"
                      ~ value = "eu-central-1" -> "non-legacy"
                    },
{
                      + name  = "AWS_SIGNATURE_VERSION"
                      + value = "v4"
                    },

Terraform 0.12.29
AWS Provider 3.5

edit: Well, I found my issue.
For anyone who cares: one of the env vars for the template was an empty string. In my opinion, I should be able to pass in empty vars; if that is not allowed, Terraform should raise an error instead of the random changes I have been seeing for the last 3 days.
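The empty-string pitfall described above can be guarded against explicitly; a minimal sketch, assuming Terraform 0.13+ and a hypothetical `pgdatabase` variable (the name is illustrative, not from the original config):

```hcl
# Hypothetical guard: fail early at plan time if a template var is an
# empty string, instead of letting the rendered template produce a
# confusing, shuffled diff.
variable "pgdatabase" {
  type = string

  validation {
    condition     = length(var.pgdatabase) > 0
    error_message = "pgdatabase must not be an empty string."
  }
}
```

With a validation block like this, the failure surfaces as a clear error at plan time rather than as spurious environment-variable reordering in the diff.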

@h2ppy

h2ppy commented Jan 14, 2021

Is this bug resolved? I am using Terraform v0.13.5 and I'm getting the same issue: on running terraform apply/plan, a change is detected even when there isn't any change. Terraform simply removes the setting and includes it again.
[screenshot of plan output]

@marisveide

@h2ppy - it's resolved in Terraform v0.14.

@h2ppy

h2ppy commented Jan 14, 2021

@h2ppy - it's resolved in Terraform v0.14.

Hey @marisveide I just updated the terraform version and am still facing the issue.

@aaadipop

aaadipop commented Feb 8, 2021

same here

data "template_file" "proxy" {
  template = file("./modules/lb/scripts/proxy.sh")
  vars = {
    ip = google_compute_forwarding_rule.lb.ip_address
  }
}

----- tf apply

# module.lb_internal.data.template_file.proxy will be read during apply
  # (config refers to values not yet known)
 <= data "template_file" "proxy"  {
      ~ id       = "e3943b3a560450bf78f3e4334b1cb0b9e5e5feea1486a351d99809aa53d98da0" -> (known after apply)
      ~ rendered = <<-EOT
            apt update
            apt install nginx vim -y

            cat > /etc/nginx/sites-enabled/default << 'EOF'
            server {
            	listen 80 default_server;
            	listen [::]:80 default_server;

            	root /var/www/html;

            	index index.html index.htm index.nginx-debian.html;

            	server_name _;

            	location / {
            		proxy_pass http://10.1.2.9:80; # ---> ${ip}
            	}
            }
            EOF

            systemctl reload nginx
        EOT -> (known after apply)
        # (2 unchanged attributes hidden)
    }

tf version 0.14.2

@andysteinwachs

andysteinwachs commented Jul 21, 2021

@h2ppy I am experiencing a similar issue under Terraform 0.13.7 with aws 3.19.0 or 3.50.0.

- setting {
    - name      = "BatchSizeType" -> null
    - namespace = "aws:elasticbeanstalk:command" -> null
    - value     = "Fixed" -> null
  }
+ setting {
    + name      = "BatchSizeType"
    + namespace = "aws:elasticbeanstalk:command"
    + value     = "Fixed"
  }

I will see what happens once I've upgraded Terraform to version 0.14 and later 1.0.2, as I'm in the process of doing so.

@devopsrick

devopsrick commented Jul 21, 2021

I opened a support ticket related to this after upgrading to TF 0.14. In the end it was determined that the template_file provider is out of date, unsupported, and archived, and should no longer be used, as it will not work correctly with TF 0.14+. It is a pretty simple conversion to the templatefile() function, however.

Old style:

data "template_file" "original_template_file" {
  template = file("${path.module}/templates/file.name")  
 
  vars = {
    key1 = value1
    key2 = value2
  }
}
 
# usage = data.template_file.original_template_file.rendered

Converted to new style:

locals {
  new_template_var = templatefile(
    "${path.module}/templates/file.name",
    {
      key1 = value1
      key2 = value2
    }
  )
}
 
# usage = local.new_template_var

@joesome-git

I'm also getting this behaviour with an aws_iam_role inline policy. There has not been a policy change, yet terraform plan tries to overwrite the policy for no reason.

Terraform v1.0.5
provider registry.terraform.io/hashicorp/aws v3.63.0

@AErmie

AErmie commented Nov 11, 2021

I'm also seeing this "false positive" with terraform plan using an AWS S3 bucket as the backend.

Terraform version v1.0.10 (via the hashicorp/terraform:light Docker container). Also, I'm using the following providers:

  • hashicorp/aws v3.64.2
  • hashicorp/archive v2.2.0

The following resources are reporting "changes", even though there were no actual changes made:

  • aws_config_config_rule
  • aws_iam_policy
  • aws_iam_role
  • aws_iam_role_policy
  • aws_iam_role_policy_attachment
  • archive_file
  • aws_lambda_function
  • aws_lambda_permission

@sveerabathini

Is there any answer for this? I am also facing the same issue for an IAM role.

@pierresouchay

@sveerabathini Same here, did you find a way?

@vinmorel

vinmorel commented May 11, 2022

bump, also facing similar issue

@sveerabathini

@pierresouchay I found that we should not combine multiple principals in a single statement; we need a separate block per principal.

For example, my initial code block under assume_role_policy was:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "s3.amazonaws.com",
          "lambda.amazonaws.com",
          "eks.amazonaws.com",
          "eks-fargate-pods.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

I changed it as below:

Version = "2012-10-17"
Statement = [
  {
    Action = "sts:AssumeRole"
    Effect = "Allow"
    Principal = {
      Service = "lambda.amazonaws.com"
    }
  },
  {
    Action = "sts:AssumeRole"
    Effect = "Allow"
    Principal = {
      Service = "s3.amazonaws.com"
    }
  },
  {
    Action = "sts:AssumeRole"
    Effect = "Allow"
    Principal = {
      Service = "eks.amazonaws.com"
    }
  },
  {
    Action = "sts:AssumeRole"
    Effect = "Allow"
    Principal = {
      Service = "eks-fargate-pods.amazonaws.com"
    }
  }
]

That worked for me, please check.

@pierresouchay

@sveerabathini Thank you for your answer. That's interesting; it most probably means there is a bug in the provider regarding the array of services.

It sounds a bit like what I saw several times:

On:

"Service": [
"s3.amazonaws.com",
"lambda.amazonaws.com",
"eks.amazonaws.com",
"eks-fargate-pods.amazonaws.com"
]

Sometimes, TF was reporting changes such as:

"Service": [
"eks-fargate-pods.amazonaws.com" +
"s3.amazonaws.com",
"lambda.amazonaws.com",
"eks.amazonaws.com",
"eks-fargate-pods.amazonaws.com" -
]

=> So the order is not being checked properly, and Service is being treated as an array instead of a set (where order does not matter).
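Until the provider treats Service as a set, one workaround is to pin a stable order yourself. A minimal sketch, assuming the policy is built with jsonencode() (resource and local names are illustrative):

```hcl
# Sketch: sort the service principals so the rendered JSON is byte-stable
# across runs, avoiding spurious ordering diffs in the plan.
locals {
  assume_services = sort([
    "s3.amazonaws.com",
    "lambda.amazonaws.com",
    "eks.amazonaws.com",
    "eks-fargate-pods.amazonaws.com",
  ])
}

resource "aws_iam_role" "example" {
  name = "example-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Action    = "sts:AssumeRole"
      Principal = { Service = local.assume_services }
    }]
  })
}
```

Note that AWS may still normalize the policy document server-side, so this only helps when the spurious diff comes from Terraform's own rendering order.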

@wsj31013

wsj31013 commented Jun 9, 2022

Terraform v0.14.6
+ provider registry.terraform.io/hashicorp/aws v4.16.0
+ provider registry.terraform.io/hashicorp/helm v2.0.2
+ provider registry.terraform.io/hashicorp/kubernetes v2.11.0
+ provider registry.terraform.io/hashicorp/local v2.2.3
+ provider registry.terraform.io/hashicorp/null v3.1.1
+ provider registry.terraform.io/hashicorp/random v3.2.0
+ provider registry.terraform.io/hashicorp/template v2.2.0

I am facing the same issue with CloudFront (aws_cloudfront_distribution).

pierresouchay pushed a commit to pierresouchay/terraform-provider-aws that referenced this issue Jul 25, 2022
As depicted here hashicorp#9042 (comment)

It seems the order of services is not predictable, causing Terraform to think it has changes to perform.

This change avoids that by always sorting the services in the same predictable order.
@CihatDinc

I am facing the same issue. I'm pulling a tf file my teammates are working on, and even though I didn't make any changes, terraform plan shows lines to be changed.

  • Sample output is below.
module.appsync.aws_appsync_resolver.this["AccountInformationResponse.country"] will be updated in-place
  ~ resource "aws_appsync_resolver" "this" {
      ~ code = <<-EOT
            /**
             * Available AppSync utilities that you can use in your request and response handler
             */
            import { util } from '@aws-appsync/utils';
...
...
...
  • The relevant resource code snippet for the output is below.
resource "aws_appsync_resolver" "this" {
   for_each = local.resolvers

   api_id = aws_appsync_graphql_api.this[0].id
   type = each.value.type
   field = each.value.field
   kind = lookup(each.value, "kind", null)

   #request_template = lookup(each.value, "request_template", tobool(lookup(each.value, "direct_lambda", false)) ? var.direct_lambda_request_template : "{}")
   #response_template = lookup(each.value, "response_template", tobool(lookup(each.value, "direct_lambda", false)) ? var.direct_lambda_response_template : "{}")
   request_template = lookup(each.value, "request_template", null)
   response_template = lookup(each.value, "response_template", null)
   #code = file(var.JSFile)
   code = lookup(each.value, "code", null)
  
   dynamic "runtime" {
     for_each = lookup(each.value, "response_template", null) == null ? [one] : []
     content {
       name = "APPSYNC_JS"
     runtime_version = "1.0.0"
     }
   }
   data_source = lookup(each.value, "data_source", null) != null ? aws_appsync_datasource.this[each.value.data_source].name : lookup(each.value, "data_source_arn", null)

   dynamic "pipeline_config" {
     for_each = lookup(each.value, "functions", null) != null ? [true] : []

     content {
       functions = [for k in each.value.functions :
       contains(keys(aws_appsync_function.this), k) ? aws_appsync_function.this[k].function_id : k]
     }
   }

   dynamic "caching_config" {
     for_each = lookup(each.value, "caching_keys", null) != null ? [true] : []

     content {
       caching_keys = each.value.caching_keys
       ttl = lookup(each.value, "caching_ttl", var.resolver_caching_ttl)
     }
   }

   max_batch_size = lookup(each.value, "max_batch_size", null)
}

@andrewbcoyle

andrewbcoyle commented Apr 18, 2023

I am also facing this issue with Kinesis delivery stream as others have noted:

+ parameters {
                      + parameter_name  = "BufferIntervalInSeconds"
                      + parameter_value = "60"

This always shows as needing to change, and removing it throws an error.

Terraform v1.4.3-dev
on darwin_amd64

  • provider registry.terraform.io/hashicorp/archive v2.3.0
  • provider registry.terraform.io/hashicorp/aws v4.63.0
  • provider registry.terraform.io/hashicorp/kubernetes v2.18.1


@hdodov

hdodov commented Nov 23, 2023

I was having a similar issue with my Elastic Beanstalk config. I kept getting changes like:

- setting {
    - name      = "PORT" -> null
    - namespace = "aws:elasticbeanstalk:application:environment" -> null
    - value     = "3000" -> null
  }
+ setting {
    + name      = "PORT"
    + namespace = "aws:elasticbeanstalk:application:environment"
    + value     = "3000"
  }

Upon carefully looking at the diff, I noticed that one setting appeared in the "added" part, but not in the "removed" part:

+ setting {
    + name      = "HealthCheckPath"
    + namespace = "aws:elasticbeanstalk:environment:process:default"
    + value     = "/health"
  }

I took aws:elasticbeanstalk:environment:process:default from the AWS docs. However, I used the AWS CLI to describe my environment:

aws --region=eu-central-1 elasticbeanstalk describe-configuration-settings --application-name my-app --environment-name my-env

…and this setting was nowhere to be found. I had manually updated the health check path and there were only these two settings related to the health check path:

- Namespace: aws:elasticbeanstalk:application
  OptionName: Application Healthcheck URL
  Value: /testpath
- Namespace: aws:elb:healthcheck
  OptionName: Target
  ResourceName: AWSEBLoadBalancer
  Value: HTTP:80/testpath

…so I changed my Terraform config to not use this:

setting {
  namespace = "aws:elasticbeanstalk:environment:process:default"
  name      = "HealthCheckPath"
  value     = "/health"
}

…but to use this instead:

setting {
  namespace = "aws:elasticbeanstalk:application"
  name      = "Application Healthcheck URL"
  value     = "/health"
}

After I ran terraform apply:

  1. The new setting applied successfully
  2. Subsequent terraform plan runs wouldn't incorrectly show changes

Edit: One peculiar thing, however, is that whenever I have a new change anywhere in the aws_elastic_beanstalk_environment resource, I still get the bug, at least when something actually is going to change. Before, I was getting it every time I ran terraform plan, regardless of whether something had changed in my config.

In my case at least, it looks like an issue with the aws_elastic_beanstalk_environment resource handling. If I modify another resource, I don't get the issue.

@alefred

alefred commented Jan 29, 2024

Same issue here with azurerm and tags:
Azure version: 3.89
Terraform version: 1.7.1

Plan Output

~ resource "azurerm_log_analytics_workspace" "xxx" {
        id                                      = "/subscriptions/xxxx"
        name                                    = "xxx"
      ~ tags                                    = {
          - "Application Name"      = "HUB: Connectivity resources" -> null
          + "Application_Name"      = "HUB: Connectivity resources"
          - "Functional Owner Name" = "Functional Owner Name" -> null
          + "Functional_Owner_Name" = "Functional Owner Name"
          - "Technical Owner Name"  = "Technical Owner Name" -> null
          + "Technical_Owner_Name"  = "Technical Owner Name"
        }
    }

Any workaround to solve this behavior?

@jchancellor-ms

Same issue here with azurerm and tags … Any workaround to solve this behavior?

@alefred - I believe this is happening due to your tag keys replacing spaces with underscores.

@justinretzolk
Member

Hi everyone 👋 Thank you all for taking the time to participate in the conversation here. It appears things have drifted a bit, and this issue has become a bit of a collection point for issues with somewhat similar symptoms. Unfortunately, this is difficult for us to address in any meaningful way, since the root cause may be different across the different resources that are mentioned.

As far as the original report, the templatefile() function was introduced with Terraform 0.12 to act as a more robust replacement for the template_file data source, the provider for which is deprecated and archived.

With those things in mind, I'm going to close this issue. If you have any lingering problems that you would like to report, please open a new issue so that we can triage it appropriately.
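For reference, a minimal sketch of that migration applied to the configuration from the original report (file path and variable taken from the issue above):

```hcl
# Sketch: the deprecated template_file data source from the original
# report, replaced with the built-in templatefile() function. The
# rendered policy is produced inline, so no data source is needed.
resource "aws_iam_role_policy" "test_role_policy" {
  name = "test-policy"
  role = aws_iam_role.test_role.id

  policy = templatefile("policies/test_policy.tpl", {
    my_var = aws_lambda_function.test.arn
  })
}
```

Because templatefile() is evaluated during planning rather than via a separate data source, it avoids the deferred-read noise that the template_file provider produced under 0.12+.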


github-actions bot commented Apr 7, 2024

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 7, 2024