
Add support for lifecycle meta-argument in modules #27360

Open · elliott-weston-cko opened this issue Dec 24, 2020 · 81 comments

@elliott-weston-cko

Current Terraform Version

Terraform v0.14.3

Use-cases

Terraform currently only allows the lifecycle meta-argument to be used within the declaration of a resource. It would be really useful if users could specify lifecycle blocks in modules that then apply to some or all of the resources within that module.

The main use-case I have is using ignore_changes to instruct Terraform to ignore changes to particular resources, or to particular attributes of resources, within a module.

Proposal

For example, let's assume I create a Terraform module to be used in AWS, and as part of that module I create a DynamoDB table. DynamoDB tables (among other resources) can autoscale, and the autoscaling configuration is defined by a separate resource. Consequently, a lifecycle block must be used to prevent the resource that creates the DynamoDB table from modifying the read/write capacity.

In this scenario I currently have to choose either to support autoscaling or not to support it, as I cannot define a lifecycle block with the ignore_changes argument at the module level.
Ideally, I'd like to be able to do something like this:

module "my-module" {
  source = "./my-module/"
  name = "foo-service"

  hash_key = "FooID"
  attributes = [
    {
      name = "FooID"
      type = "S"
    }
  ]
  lifecycle {
    ignore_changes = [
      aws_dynamodb_table.table.read_capacity,
      aws_dynamodb_table.table.write_capacity
    ]
  }
}

Being able to apply lifecycle blocks in the way shown above would enable me to manage the attributes of this resource outside of the module (whether via some automated process, or another resource/module definition), and would allow more people to use this module, since it would cover a wider range of use-cases.

The documentation states that the lifecycle block only supports literal values; I'm unsure whether my proposal would fall foul of that, as it refers to resources (and possibly attributes) that are created within the module itself 🤔

References


@rjcoelho commented Apr 16, 2021

My main use case is prevent_destroy on DDB and S3, both holding persistent end-user data that I want to protect against accidental replacement of the objects.
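
For context, this is what that protection looks like at the resource level today; module-level lifecycle support would let the same intent be stated once at the module call site (a minimal sketch, bucket name illustrative):

resource "aws_s3_bucket" "user_data" {
  bucket = "example-user-data"

  lifecycle {
    # Any plan that would destroy this bucket fails with an error.
    prevent_destroy = true
  }
}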

@Shocktrooper

This would be a good addition: more and more people are using modules like resources, so being able to use the lifecycle block at the module level would be amazing.

@chancez commented May 5, 2021

It feels like having lifecycle blocks support dynamic configuration in general would be better than adding support for lifecycle blocks in modules. It would mean modules wouldn't need special support for this; instead, vars and custom logic could be used to set different lifecycle options on resources inside the module (ensuring you can encapsulate the logic, which the approach suggested in this ticket doesn't allow for).
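
A rough sketch of that alternative, assuming lifecycle arguments were allowed to reference variables (today they must be literal values, so this is hypothetical syntax):

variable "ignore_engine_version" {
  description = "Set to true to ignore drift in engine_version"
  type        = bool
  default     = false
}

resource "aws_db_instance" "this" {
  # ...

  lifecycle {
    # Hypothetical: ignore_changes driven by a module input variable.
    ignore_changes = var.ignore_engine_version ? [engine_version] : []
  }
}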

@OGProgrammer

+1. Just ran into this and I'm also shocked it's not supported. If I had the time, I'd see about contributing this change. My use case is just like @jaceklabuda's, but for engine_version, since I have auto-update on.

module "rds" {
  source = "terraform-aws-modules/rds/aws"
  ...
  engine_version = "5.7.33"
  
  lifecycle {
    ignore_changes = [
      engine_version
    ]
  }
  ...
 }

@antonbabenko (Contributor) commented Sep 1, 2021

@OGProgrammer You can set engine_version = "5.7" instead of "5.7.33" in the RDS module you are using. This will prevent it from showing a diff every time the patch version is updated. See the aws_db_instance docs for engine_version.
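
For example (version numbers illustrative):

module "rds" {
  source = "terraform-aws-modules/rds/aws"
  ...
  # Pinning only major.minor lets AWS track the patch release
  # without Terraform reporting a diff after auto upgrades.
  engine_version = "5.7"
  ...
}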

@ghost commented Sep 1, 2021

Just sharing my experience here, in case it helps :)
If you do not set the complete version (major, minor, and patch number), AWS always offers the latest patch release.
That means if the version is set to 5.7 and the latest version offered by AWS at deployment time is 5.7.30, that is what gets installed. The next time you deploy the same package, if AWS now offers 5.7.35 (new patches having been published), Terraform will show a diff, and applying the change usually leads to an outage, unless you have set scheduled maintenance windows (which prevent unplanned patch upgrades).
So I also think setting exact versions is better than ignoring them via a lifecycle block, because it makes troubleshooting easier. It is best to update the patch versions used in the code during regular maintenance periods.

@aidan-mundy

A barebones implementation of prevent_destroy for modules should prevent destruction of the module itself (via a terraform destroy command), not destruction of the resources inside it.

Additional work to allow resource-specific lifecycles within the module, or to prevent all resources in the module from being destroyed, would be nice as well, but I don't see those as immediately essential.
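
In that barebones form, the proposal might look like this (hypothetical syntax, not valid in current Terraform):

module "database" {
  source = "./modules/database"

  lifecycle {
    # Hypothetical: blocks `terraform destroy` of the module as a whole,
    # without constraining the individual resources inside it.
    prevent_destroy = true
  }
}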

@BHSDuncan

In case it helps: this would also be helpful for blue/green deployments, where there's a 50% chance of the primary listener having its default_action updated with the wrong target group (when there are two TGs), namely in the terraform-aws-modules/alb/aws module. Using the module beats having to manage several different TF resources.

@stephenh1991

For anyone who encounters this issue and wants to protect module resources, we found a slightly hacky but workable solution within a wrapper module:

resource "null_resource" "prevent_destroy" {
  count = var.prevent_destroy ? 1 : 0

  depends_on = [
    module.s3_bucket ## this is the official aws s3 module
  ]

  triggers = {
    bucket_id = module.s3_bucket.s3_bucket_id
  }

  lifecycle {
    prevent_destroy = true
  }
}

So far it seems to be a one-way flag that can't be turned off, but it works well to protect buckets where content recovery would be a lengthy and disruptive task.

@nlitchfield

We could also really do with this feature. We have a reasonably extensive library of Terraform modules wrapping everything from EC2 instances to application stacks. Taking the EC2 module as an example, we use a data source like the example from the docs to supply a "latest" AMI at build time:

data "aws_ami" "example" {
  most_recent = true

  owners = ["self"]
  tags = {
    Name   = "app-server"
    Tested = "true"
  }
}

Most of our infrastructure is immutable, so a later AMI results in recreation of any EC2 instances sourced from the module, but for some infra we'd like to use ignore_changes for the AMI, as you might with a plain resource. This proposal would make achieving that much easier.
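
For comparison, this is the resource-level pattern we would like to be able to express at the module boundary (a sketch, names illustrative):

resource "aws_instance" "app" {
  ami           = data.aws_ami.example.id
  instance_type = "t3.micro"

  lifecycle {
    # Use the latest AMI on first create, then ignore newer AMIs
    # so existing instances are not recreated on every build.
    ignore_changes = [ami]
  }
}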


@crw (Contributor) commented Jun 7, 2023

Thanks for your interest in this issue! This is just a reminder to please avoid "+1" comments, and to use the upvote mechanism (click or add the 👍 emoji to the original post) to indicate your support for this issue. We are aware of this issue (it is one of the highest-upvoted issues, currently 4th highest upvoted) and we do not have any updates to share. Thanks again for the feedback!


@jorsmatthys

I would like to see this feature as well. We encapsulate resources in "base modules" at my current customer, and those modules are re-used across many Terraform projects to deploy solutions to Azure. In some cases, things like the standard tags we apply to those resources are overwritten by policy on Azure, triggering changes on every Terraform run. Though what @chancez proposes would work for us, being able to just ignore these changes at the module level when the occasional need arises would feel more natural, cleaner, and easier than modifying the interface of our base modules to support passing lifecycle arguments to each of the underlying resources.
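
Today the exception has to be baked into the base module itself, resource by resource (a sketch, resource and variables illustrative):

resource "azurerm_resource_group" "this" {
  name     = var.name
  location = var.location
  tags     = var.tags

  lifecycle {
    # Workaround hard-coded in the module: applies to every consumer,
    # whether or not Azure Policy rewrites their tags.
    ignore_changes = [tags]
  }
}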

@fszymanski-blvd

In our codebase, we use a third-party module to deploy our Aurora databases. For our DR environment, we use a data object to grab the most recent snapshot and pass it to the snapshot_identifier field. However, if another snapshot is created, the module will attempt to recreate the database from the new snapshot.

Since changing the snapshot ID may legitimately necessitate recreating the database in some workflows, this seems like an ideal use case for passing in an ignore_changes value at the module level.
