
aws_ecs_task_definition overwrites previous revision #258

Closed
hashibot opened this issue Jun 13, 2017 · 30 comments · Fixed by #22269
@hashibot

This issue was originally opened by @dimahavrylevych as hashicorp/terraform#8740. It was migrated here as part of the provider split. The original body of the issue is below.


Hello community,

I ran into an issue while working with aws_ecs_task_definition.
Here is my configuration:

data "template_file" "task_definition" {
  template = "${file("task-definitions/service.json")}"
  vars {
    SERVICE_NAME = "test"
    IMAGE_URL = "test:latest"
  }
}

resource "aws_ecs_task_definition" "test" {
    family = "test"
    network_mode = "bridge"
    container_definitions = "${data.template_file.task_definition.rendered}"
}

resource "aws_ecs_service" "test" {
  name = "test"
  cluster = "test"
  task_definition = "${aws_ecs_task_definition.test.arn}"
  desired_count = 2
  deployment_minimum_healthy_percent = 100
}

When I run terraform plan, part of the output looks like:

-/+ ...
...
 revision:              "24" => "<computed>"
...

When I run terraform apply and log in to AWS, I see that the new revision has been created but the previous one has disappeared.

Question:

Is it possible to implement a flag that would allow me to keep previous revisions?

@hashibot hashibot added the bug Addresses a defect in current functionality. label Jun 13, 2017
@sychevsky

This is expected behavior. I work around it with something like this:

data "aws_ecs_task_definition" "my-service" {
  task_definition = "${aws_ecs_task_definition.my-service.family}"
}

resource "aws_ecs_task_definition" "my-service" {
  family                = "${var.environment_name}-${var.service_name}-${var.instance_name}"
  network_mode          = "bridge"
  container_definitions = "${data.template_file.my-service.rendered}"
}

resource "aws_ecs_service" "my-service" {
 ...
  #Track the latest ACTIVE revision
  task_definition = "${aws_ecs_task_definition.my-service.family}:${max(aws_ecs_task_definition.my-service.revision, data.aws_ecs_task_definition.my-service.revision)}"
...
}

This example worked with Terraform v0.9.2 but not with Terraform 0.9.11; it may be a bug in the newer version.

@adamgotterer

I dealt with it by adding a lifecycle ignore to the task definition and service:

resource "aws_ecs_task_definition" "task" {
  ...

  lifecycle {
    ignore_changes = ["*"]
  }
}

resource "aws_ecs_service" "test" {
  ...

  lifecycle {
    ignore_changes = ["task_definition"]
  }
}

@radeksimko radeksimko added the service/ecs Issues and PRs that pertain to the ecs service. label Jan 25, 2018
@dev-head

+1 We hope to see a solution to this issue soon; thanks, Hashi, for the new tag, and here's hoping this is moving along. @adamgotterer's workaround is viable, as long as you are able to manually enable and disable those ignore_changes attributes.

@alovak

alovak commented Mar 27, 2018

From the AWS CLI documentation for --task-definition:

The family and revision (family:revision ) or full ARN of the task definition to run in your service. If a revision is not specified, the latest ACTIVE revision is used.

Just use the family only. It still doesn't solve the issue of the plan showing changes like task_definition: "api:21" => "api", but at least it will not break anything.
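
For example, a minimal sketch of that family-only approach (the names here are illustrative, not taken from anyone's actual config; as noted above, the plan may still show a cosmetic diff like task_definition: "api:21" => "api" after a refresh):

resource "aws_ecs_service" "api" {
  name          = "api"
  cluster       = "my-cluster"
  desired_count = 2

  # Family only: no ":revision" suffix and no ARN. ECS then resolves the
  # latest ACTIVE revision of the family whenever the service is deployed.
  task_definition = "${aws_ecs_task_definition.api.family}"
}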

@moali87

moali87 commented Apr 6, 2018

+1 We shouldn't need to ignore all changes on the task_definition resource, only on the service.
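
A sketch of that narrower workaround (resource names here are illustrative), keeping the task definition fully managed and ignoring only the service's task_definition reference:

resource "aws_ecs_service" "app" {
  name            = "app"
  cluster         = "my-cluster"
  desired_count   = 2
  task_definition = "${aws_ecs_task_definition.app.arn}"

  lifecycle {
    # Ignore only the task_definition reference on the service; the
    # aws_ecs_task_definition resource itself stays fully managed.
    ignore_changes = ["task_definition"]
  }
}

The trade-off is that Terraform then stops rolling the service forward to new revisions on its own, so deployments have to update the service some other way.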

@binarymatt

It would be very useful to have a flag that would not deregister task definitions when a new one is created. In our case, being able to rollback a service to a previous version in case of bugs is something we'd like to have available.

@brandonawells86

I've been running into this issue for a while and have used lifecycle as a band-aid solution. It would be nice to have a more solid fix.

@mroshan1

mroshan1 commented Jun 4, 2018

Using the lifecycle block still seems to destroy the old task definition; I'm not sure how you all are using it as a workaround for the overwrite issue. Any help would be appreciated.

@Geethree

Old task revisions are marked as inactive and can be reactivated if needed...

@maxrothman

@Geethree as per the AWS docs, inactive task definitions can't be reactivated, and can only be relied on to continue existing as long as running tasks reference them.

@LiborVilimekMassive

Hi guys, I just want to share my solution: I simply remove the task definition from state after creation, as I don't need Terraform to manage it anymore (it's in a revision and that's it).

terraform state rm aws_ecs_task_definition.this

So next time a new revision is created and the old one remains.

@braybaut

braybaut commented Mar 4, 2019

@LiborVilimekMassive how does this work? Because after I run state rm, I either have to import the task definition that is marked as active, or Terraform has to create the task definition again.

@LiborVilimekMassive

LiborVilimekMassive commented Mar 4, 2019

@braybaut - the rm does not remove the resource; it only stops tracking it (i.e. it is removed from the state).

So in my Terraform scripts I have

resource "aws_ecs_task_definition" "this" {
  //some resources
}

Then I basically call these two commands

terraform apply -auto-approve
terraform state rm aws_ecs_task_definition.this

The next time these scripts are executed (and something has changed in the task definition), Terraform does not know about the previous task definition (as it is not in its state), so it creates a new revision instead and does not delete the old one.

I suppose you could even do it the other way around: remove it from state before the apply, and it would work as well.
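
A sketch of that alternative ordering, with an illustrative resource address:

# Drop the task definition from state first, then apply; Terraform no longer
# tracks the old revision, so it registers a new one and leaves the previous
# revision active. "|| true" keeps a missing state entry on the very first
# run from aborting the script.
terraform state rm aws_ecs_task_definition.this || true
terraform apply -auto-approve

With either ordering, Terraform registers a fresh revision on every apply, because it never sees the previous one in its state.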

@braybaut

braybaut commented Mar 4, 2019

@LiborVilimekMassive yes, I agree with this, but here is my issue:

I have a task definition resource and a service resource. This is my service resource:

resource "aws_ecs_service" "service" {
  count           = "${1 - var.create_elb}"
  name            = "service_${var.micro_service_name}"
  cluster         = "${var.cluster_id}"
  task_definition = "${aws_ecs_task_definition.task_definition.arn}"
  desired_count   = "${var.desired_count}"

  lifecycle {
    create_before_destroy = true
    ignore_changes        = ["task_definition", "deployment_minimum_healthy_percent", "desired_count"]
  }
}

This ignores the task definition on the service and it works, but when I need to upgrade the service with a new revision and then run terraform apply, I see that Terraform wants to create a new task definition, which I want to avoid:

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  + module.wealth-roboadvisor-datalakereport.aws_ecs_task_definition.task_definition
      id:           <computed>
      arn:          <computed>
      network_mode: "host"
      revision:     <computed>

Plan: 1 to add, 0 to change, 0 to destroy.

If I try to remove the resource from state, Terraform has to create the resource again. :c

tomelliff added a commit to tomelliff/terraform-provider-aws that referenced this issue Jan 7, 2020
Task definition revisions are immutable so Terraform is unable to just update this resource and instead needs to delete the old revision and create a new one.
This isn't helpful if you still want access to the old task definition such as if you are using another system to manage rollbacks etc as you are unable to roll a service back to an inactive task definition.

Closes hashicorp#258
@reedflinch

reedflinch commented Jul 8, 2020

I agree that @LiborVilimekMassive's solution is the closest we seem to get to the ideal state. However, with terraform state rm we lose the diff between task definition changes.

Ideally, as @binarydud said, we just don't want Terraform to deregister our old task definitions while still showing changes between old and new.

EDIT:

For those following along, we've found a decent workaround:

# we can still get the task definition diff at this point, which we care about
terraform plan

# remove from state so that task definition is not destroyed, and we're able to rollback in the future if needed
terraform state rm aws_ecs_task_definition.main

# diff will show a brand new task definition created, but that's ok because we got the diff in step 1
terraform apply

@rimiti

rimiti commented Nov 23, 2020

Is aws_ecs_task_definition working for anyone? It is not working for me.

provider "aws" {
 ...
  version = "~> 2.13"
}

Terraform v0.12.29

@FrederikNygaardSvendsen

(Quoting @reedflinch's workaround above: terraform plan, then terraform state rm aws_ecs_task_definition.main, then terraform apply.)

My workaround was to remove the resource from the state file after apply (not after plan); this is way better as it actually shows the diff. Thanks for sharing!

@ghost

ghost commented Feb 22, 2021

@FrederikNygaardSvendsen doesn't that mean you end up in exactly the same situation the next time you run plan?

  1. plan -> apply -> remove from state
  2. plan against a state that doesn't have the resource -> the plan shows the addition of a new resource rather than modifications

@csbodine

I'm curious why this is still an issue. aws_launch_template does the right thing: the old templates are retained, and new ones are only added when Terraform detects a change. Why can't the same mechanism exist for task definitions?

@plinioh

plinioh commented Jun 21, 2021

Any plans on revisiting this issue?

@ArneRiemann4711

Would also love to see some news ;)

@olegflo

olegflo commented Sep 13, 2021

Bumping up :) seems like the community wants it 🙏

@TarekAS

TarekAS commented Sep 22, 2021

A one-liner if you have multiple task definitions that you want to remove from the state after the apply:

terraform state rm $(terraform state list | grep aws_ecs_task_definition)
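
One caveat (a shell-behaviour note, not something raised in the thread): if any of those addresses are indexed, e.g. aws_ecs_task_definition.app[0] from count or for_each, the unquoted command substitution can be mangled by shell globbing. A variant that passes each address to terraform state rm on its own avoids that:

terraform state list | grep aws_ecs_task_definition | xargs -n1 terraform state rm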

@breathingdust

Hi all 👋 Just letting you know that this issue is featured on this quarter's roadmap. If a PR exists to close the issue, a maintainer will review it and either make changes directly or work with the original author to get the contribution merged. If you have written a PR to resolve the issue, please ensure the "Allow edits from maintainers" box is checked. Thanks for your patience, and we are looking forward to getting this merged soon!

@breathingdust breathingdust added this to the Roadmap milestone Nov 10, 2021
@github-actions

This functionality has been released in v3.72.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!

@d10i

d10i commented Jan 25, 2022

Is it just me, or is this not actually fixed? It still tries to destroy the task definition when I change the container definitions, even with skip_destroy = true.

Tried with this:

resource "aws_ecs_service" "test" {
  name = "test"
  cluster = "test"
  task_definition = aws_ecs_task_definition.test.arn
}

resource "aws_ecs_task_definition" "test" {
    family = "test"
    network_mode = "bridge"
    container_definitions = jsonencode(["..."])

    skip_destroy = true
}

Using version 3.73.0.

@d10i

d10i commented Jan 25, 2022

Ignore me; it's just the Terraform plan output that is confusing. It looks like it deletes the task definition from AWS, while it actually only removes the old one from the Terraform state and keeps it in AWS.

@hera1002

The task definition still gets deregistered for me.

@github-actions

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 12, 2022