
Terraform keeps forcing new resource on unchanged container_definitions #16769

Closed

segersniels opened this issue Nov 27, 2017 · 7 comments

segersniels commented Nov 27, 2017

Terraform Version

Terraform v0.11.0
+ provider.aws v1.4.0

Terraform Configuration Files

resource "aws_ecs_task_definition" "httpd" {
  family                = "foo-httpd-${var.environment}"
  container_definitions = "${file("task-definitions/foo-httpd.json.definition")}"
  task_role_arn         = "${aws_iam_role.foo.arn}"
}

resource "aws_ecs_service" "httpd" {
  name            = "foo-httpd-${var.environment}"
  cluster         = "${data.terraform_remote_state.ecs.cluster_id}"
  task_definition = "${aws_ecs_task_definition.httpd.arn}"

  placement_strategy {
    type = "spread"
    field = "instanceId"
  }

  placement_strategy {
    type = "spread"
    field = "attribute:ecs.availability-zone"
  }

  desired_count = 1
}

resource "aws_iam_role" "foo_httpd" {
  name = "${var.project}-${var.environment}-${var.region}-foo-httpd"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "ecs-tasks.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
task-definitions/foo-httpd.json.definition:

[
  {
    "command": [],
    "name": "foo-development-foo-httpd",
    "image": "httpd",
    "cpu": 10,
    "memory": 200,
    "links": [],
    "extraHosts": [
      {
        "hostname": "datadog",
        "ipAddress": "127.0.0.1"
      }
    ],
    "logConfiguration": {
      "logDriver": "awslogs",
      "options": {
        "awslogs-group": "foo-development-eu-west-1",
        "awslogs-region": "eu-west-1",
        "awslogs-stream-prefix": "foo-httpd"
      }
    },
    "environment": [
      {
        "name": "FOO_DYNAMO_ENDPOINT",
        "value": "https://dynamodb.eu-west-1.amazonaws.com"
      },
      {
        "name": "FOO_SNS_REGION",
        "value": "eu-west-1"
      },
      {
        "name": "FOO_LAMBDA_REGION",
        "value": "eu-west-1"
      },
      {
        "name": "FOO_API_PORT_INTERNAL",
        "value": "3000"
      },
      {
        "name": "FOO_API_PORT_EXTERNAL",
        "value": "443"
      },
      {
        "name": "FOO_API_PATH",
        "value": "/v1"
      },
      {
        "name": "FOO_UPDATE_INTERVAL",
        "value": "2500"
      },
      {
        "name": "FOO_STATSD_HOST",
        "value": "datadog"
      },
      {
        "name": "FOO_STATSD_PORT",
        "value": "8125"
      },
      {
        "name": "FOO_MEMORY_CACHE_TARGET_TTL",
        "value": "5"
      },
      {
        "name": "FOO_MEMORY_CACHE_CHANNEL_TTL",
        "value": "5"
      },
      {
        "name": "FOO_MEMORY_CACHE_STREAM_TTL",
        "value": "5"
      }
    ],
    "dockerLabels": {
      "flip": "flop3"
    },
    "portMappings": [
      {
        "containerPort": 3000,
        "hostPort": 0
      }
    ]
  }
]

Expected Behavior

Terraform detects no changes and plans nothing when using terraform plan.

Plan: 0 to add, 0 to change, 0 to destroy.

Actual Behavior

Terraform keeps forcing a new resource on container_definitions even though the JSON file hasn't changed, deleting the old task definition and recreating it on every run. It seems to be duplicating the container definitions JSON file.

Terraform will perform the following actions:

  ~ aws_ecs_service.httpd
      task_definition:       "arn:aws:ecs:eu-west-1:111111111111111:task-definition/foo-httpd-development:1" => "${aws_ecs_task_definition.httpd.arn}"

-/+ aws_ecs_task_definition.httpd (new resource required)
      id:                    "foo-httpd-development" => <computed> (forces new resource)
      arn:                   "arn:aws:ecs:eu-west-1:111111111111111:task-definition/foo-httpd-development:1" => <computed>
      container_definitions: "[{\"command\":[],\"cpu\":10,\"dockerLabels\":{\"flip\":\"flop3\"},\"environment\":[{\"name\":\"FOO_HTTP_CACHE_CONTROL_CHANNEL\",\"value\":\"5\"},{\"name\":\"FOO_API_PATH\",\"value\":\"/v1\"},{\"name\":\"FOO_STATSD_HOST\",\"value\":\"datadog\"},{\"name\":\"FOO_UPDATE_INTERVAL\",\"value\":\"2500\"},{\"name\":\"FOO_LAMBDA_REGION\",\"value\":\"eu-west-1\"},{\"name\":\"FOO_SNS_REGION\",\"value\":\"eu-west-1\"},{\"name\":\"FOO_MEMORY_CACHE_STREAM_TTL\",\"value\":\"5\"},{\"name\":\"FOO_STATSD_PORT\",\"value\":\"8125\"},{\"name\":\"FOO_MEMORY_CACHE_CHANNEL_TTL\",\"value\":\"5\"},{\"name\":\"FOO_MEMORY_CACHE_TARGET_TTL\",\"value\":\"5\"},{\"name\":\"FOO_HTTP_CACHE_CONTROL_TARGET\",\"value\":\"5\"},{\"name\":\"FOO_API_PORT_EXTERNAL\",\"value\":\"443\"},{\"name\":\"FOO_HTTP_CACHE_CONTROL_STREAM\",\"value\":\"5\"},{\"name\":\"FOO_DYNAMO_ENDPOINT\",\"value\":\"https://dynamodb.eu-west-1.amazonaws.com\"},{\"name\":\"FOO_API_PORT_INTERNAL\",\"value\":\"3000\"}],\"essential\":true,\"extraHosts\":[{\"hostname\":\"datadog\",\"ipAddress\":\"127.0.0.1\"}],\"image\":\"httpd\",\"links\":[],\"logConfiguration\":{\"logDriver\":\"awslogs\",\"options\":{\"awslogs-group\":\"foo-development-eu-west-1\",\"awslogs-region\":\"eu-west-1\",\"awslogs-stream-prefix\":\"foo-httpd\"}},\"memory\":200,\"mountPoints\":[],\"name\":\"foo-development-foo-httpd\",\"portMappings\":[{\"containerPort\":3000,\"hostPort\":0,\"protocol\":\"tcp\"}],\"volumesFrom\":[]}]" => 
"[{\"command\":[],\"cpu\":10,\"dockerLabels\":{\"flip\":\"flop3\"},\"environment\":[{\"name\":\"FOO_DYNAMO_ENDPOINT\",\"value\":\"https://dynamodb.eu-west-1.amazonaws.com\"},{\"name\":\"FOO_SNS_REGION\",\"value\":\"eu-west-1\"},{\"name\":\"FOO_LAMBDA_REGION\",\"value\":\"eu-west-1\"},{\"name\":\"FOO_API_PORT_INTERNAL\",\"value\":\"3000\"},{\"name\":\"FOO_API_PORT_EXTERNAL\",\"value\":\"443\"},{\"name\":\"FOO_API_PATH\",\"value\":\"/v1\"},{\"name\":\"FOO_UPDATE_INTERVAL\",\"value\":\"2500\"},{\"name\":\"FOO_STATSD_HOST\",\"value\":\"datadog\"},{\"name\":\"FOO_STATSD_PORT\",\"value\":\"8125\"},{\"name\":\"FOO_MEMORY_CACHE_TARGET_TTL\",\"value\":\"5\"},{\"name\":\"FOO_MEMORY_CACHE_CHANNEL_TTL\",\"value\":\"5\"},{\"name\":\"FOO_MEMORY_CACHE_STREAM_TTL\",\"value\":\"5\"},{\"name\":\"FOO_HTTP_CACHE_CONTROL_STREAM\",\"value\":\"5\"},{\"name\":\"FOO_HTTP_CACHE_CONTROL_CHANNEL\",\"value\":\"5\"},{\"name\":\"FOO_HTTP_CACHE_CONTROL_TARGET\",\"value\":\"5\"},{\"name\":\"FOO_MEMORY_CACHE_TARGET_TTL\",\"value\":\"5\"}],\"extraHosts\":[{\"hostname\":\"datadog\",\"ipAddress\":\"127.0.0.1\"}],\"image\":\"httpd\",\"links\":[],\"logConfiguration\":{\"logDriver\":\"awslogs\",\"options\":{\"awslogs-group\":\"foo-development-eu-west-1\",\"awslogs-region\":\"eu-west-1\",\"awslogs-stream-prefix\":\"foo-httpd\"}},\"memory\":200,\"name\":\"foo-development-foo-httpd\",\"portMappings\":[{\"containerPort\":3000,\"hostPort\":0}]}]" (forces new resource)
      family:                "foo-httpd-development" => "foo-httpd-development"
      network_mode:          "" => <computed>
      revision:              "1" => <computed>
      task_role_arn:         "arn:aws:iam::111111111111111:role/foo-development-eu-west-1-foo-httpd" => "arn:aws:iam::111111111111111:role/foo-development-eu-west-1-foo-httpd"

Debug Output

2017-11-30T10:46:25.188Z [DEBUG] plugin.terraform-provider-aws_v1.4.0_x4: 2017/11/30 10:46:25 [DEBUG] Instance Diff is nil in Diff()
2017-11-30T10:46:25.189Z [DEBUG] plugin.terraform-provider-aws_v1.4.0_x4: 2017/11/30 10:46:25 [DEBUG] Canonical definitions are not equal.

Steps to Reproduce

  1. terraform plan

Important Factoids

We generate our JSON files with the template_dir resource and pass them to our ECS services / task definitions. This behaviour is new: it did not occur with older versions, and it started after upgrading to 0.11.0. Terraform has also started dumping the entire JSON as text into the terraform plan output, where it used to show a randomly generated id. Also worth noting: Terraform doesn't do this for every ECS service.

@segersniels segersniels changed the title Terraform keeps forcing new resource on unchanged container_definitions (JSON file) Terraform keeps forcing new resource on unchanged container_definitions Nov 30, 2017
segersniels (Author)

I have been able to figure out the issue. Older Terraform versions didn't throw errors when duplicate environment variables (introduced by human error) were present in the container_definitions file. Version 0.11.0 actually recognises these duplicates and forces a recreation of the resource (without applying the duplicate key), resulting in Terraform trying to recreate the resource every time a plan is issued.
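Not from the thread itself, but as an illustration: this class of mistake can be caught before running a plan with a small standalone script (a hypothetical helper, not part of Terraform or the AWS provider):

```python
import json
from collections import Counter

def find_duplicate_env_vars(container_definitions: str) -> dict:
    """Return duplicated environment variable names, keyed by container name.

    `container_definitions` is the JSON string that would be passed to the
    aws_ecs_task_definition resource's container_definitions argument.
    """
    duplicates = {}
    for container in json.loads(container_definitions):
        names = [env["name"] for env in container.get("environment", [])]
        dupes = [name for name, count in Counter(names).items() if count > 1]
        if dupes:
            duplicates[container["name"]] = dupes
    return duplicates

# Minimal example with one duplicated key, mirroring the human error described above.
definitions = """
[{"name": "web",
  "environment": [
    {"name": "FOO_MEMORY_CACHE_TARGET_TTL", "value": "5"},
    {"name": "FOO_MEMORY_CACHE_TARGET_TTL", "value": "5"}
  ]}]
"""
print(find_duplicate_env_vars(definitions))
# {'web': ['FOO_MEMORY_CACHE_TARGET_TTL']}
```

An empty result means no container repeats an environment variable name, so this particular cause of the perpetual diff can be ruled out.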

ghost commented Dec 6, 2017

I'm still experiencing this; it's new in Terraform 0.11.1 (upgraded from 0.10.8), despite having no duplicate environment variables or other attributes.

Terraform is incorrectly detecting the addition by AWS of a field with a value of "null" as a change. Even when letting terraform apply and then copying and pasting the container definitions from the task definition in the aws console, I can't reliably get terraform to detect no changes.

segersniels (Author)

What helped me determine the issue was running terraform plan with TF_LOG=debug. The debug log shows a "First" and a "Second" container definition JSON. I saved the two JSONs to separate files and compared them with a tool called json-diff.

This might help you determine a difference in your container definitions that you possibly oversaw.
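The same comparison can be sketched in a few lines of Python instead of json-diff. The normalization step below (sorting each container's environment list by name) is an assumption about what typically differs between the two dumps; it makes ordering-only differences disappear so that any remaining diff is a real one:

```python
import json

def normalize(defs_json: str) -> list:
    """Parse a container-definitions JSON string and sort each container's
    environment list by variable name, so two dumps that differ only in
    environment ordering compare equal."""
    defs = json.loads(defs_json)
    for container in defs:
        container["environment"] = sorted(
            container.get("environment", []), key=lambda e: e["name"])
    return defs

# Two definitions that differ only in environment ordering (made-up sample data).
first = '[{"name": "web", "environment": [{"name": "B", "value": "2"}, {"name": "A", "value": "1"}]}]'
second = '[{"name": "web", "environment": [{"name": "A", "value": "1"}, {"name": "B", "value": "2"}]}]'
print(normalize(first) == normalize(second))
# True
```

If the normalized forms still differ, printing them with json.dumps(..., indent=2) and diffing the output pinpoints the attribute AWS has changed or added.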

spangaer commented Jan 4, 2018

I faced a similar problem.
I took the container definitions from the state, replaced \" with ", formatted the JSON, and substituted back the elements that should be parameters. That worked as expected: Terraform detected no change.

It seems that what is stored in the state is what AWS makes of the container definitions.
I thought the sorting might have something to do with it, but further investigation proved that wrong. In the end it turned out AWS was adding unspecified parameters to the container definition.

In my case in the port mapping

	"portMappings": [{
		"containerPort": ${port}
	}],

Became

	"portMappings": [{
		"containerPort": 40000,
		"hostPort": 40000,
		"protocol": "tcp"
	}],

I'm using awsvpc, in which case both ports must match, as I now know.
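A sketch of pre-applying that normalization locally, based on the behaviour observed above (AWS filling in hostPort from containerPort and defaulting protocol to "tcp"); this mirrors what the state ends up containing, it is not an official API:

```python
import json

def fill_port_mapping_defaults(defs_json: str) -> str:
    """Fill in the port-mapping fields AWS adds when they are left
    unspecified: hostPort defaults to containerPort (the awsvpc rule)
    and protocol defaults to "tcp". Returns the normalized JSON string."""
    defs = json.loads(defs_json)
    for container in defs:
        for mapping in container.get("portMappings", []):
            mapping.setdefault("hostPort", mapping["containerPort"])
            mapping.setdefault("protocol", "tcp")
    return json.dumps(defs)

# The sparse mapping from the comment above (40000 substituted for ${port}).
source = '[{"name": "web", "portMappings": [{"containerPort": 40000}]}]'
print(fill_port_mapping_defaults(source))
# [{"name": "web", "portMappings": [{"containerPort": 40000, "hostPort": 40000, "protocol": "tcp"}]}]
```

Writing the container definitions with these fields already filled in keeps the local JSON identical to what AWS stores, so the diff disappears.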

theladyjaye commented Apr 13, 2018

@spangaer Thanks for pointing out the portMappings. Upon further reading of the AWS ECS task definition documentation, I ran across this:

For task definitions that use the awsvpc network mode, you should only specify the containerPort. The hostPort can be left blank or it must be the same value as the containerPort.

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html

I had figured only the first clause applied, i.e. that hostPort could simply be left blank.

Looking at what Terraform outputs in the template though:

[{\"containerPort\":8000,\"hostPort\":8000,\"protocol\":\"tcp\"}]

Terraform is filling it all in, per:

or it must be the same value as the containerPort.

Once I amended my task definition JSON, as you have done:

"portMappings": [
            {
                "containerPort": 8000,
                "hostPort": 8000,
                "protocol": "tcp"
            }
        ],

It no longer wants to recreate the resource. Thank you for finding that!

fabian-dev commented Jul 31, 2018

The issue where a missing hostPort forces changes to the task definition is addressed over in hashicorp/terraform-provider-aws#3401.

ghost commented Apr 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@hashicorp hashicorp locked and limited conversation to collaborators Apr 2, 2020