Lifecycle rules on launch config create cyclic dependency during 'destroy' #3294

Open
gposton opened this issue Sep 21, 2015 · 10 comments

@gposton (Contributor) commented Sep 21, 2015

It seems to me that lifecycle rules should be ignored during the 'destroy' action.

I have a Terraform template that consists of an ASG and a launch config (among other things).

Without lifecycle rules, the initial 'apply' and a subsequent 'destroy' work as expected.

However, I am unable to update the AMI in the launch config, as I get this error:

Error applying plan:

1 error(s) occurred:

* ResourceInUse: Cannot delete launch configuration consul because it is attached to AutoScalingGroup consul
    status code: 400, request id: [119046e5-60a0-11e5-9e64-6167bb31c650]

So I added the create_before_destroy lifecycle rule to the launch config, as in the sketch below.
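
For reference, a minimal sketch of that change, assuming Terraform 0.6-era syntax; the resource and variable names are taken from the cycle error below, and the remaining attributes (image_id, instance_type) are illustrative:

    resource "aws_launch_configuration" "consul_asg_conf" {
      # A unique name per AMI lets the replacement config coexist with the old one.
      name          = "consul-${var.ami}"
      image_id      = "${var.ami}"
      instance_type = "t2.micro"

      lifecycle {
        create_before_destroy = true
      }
    }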

Now the initial 'apply' works, and a subsequent apply can change the AMI. Everything works as expected, except 'destroy'. When I run a destroy, I now get the following error:

Error creating plan: 1 error(s) occurred:

* Cycle: aws_security_group.consul_elb, aws_elb.elb, aws_security_group.consul_server (destroy), terraform_remote_state.vpc (destroy), terraform_remote_state.vpc, aws_security_group.consul_server, aws_launch_configuration.consul_asg_conf, aws_autoscaling_group.consul_asg, aws_launch_configuration.consul_asg_conf (destroy), aws_iam_instance_profile.profile (destroy), aws_iam_role.role (destroy), aws_iam_role.role, aws_iam_instance_profile.profile

I can follow the documentation and add the lifecycle rule to the ASG as well (see the sketch after this paragraph). This makes everything run successfully from Terraform's perspective. However, it has unintended consequences.

When the lifecycle rule is not on the ASG, I can change the AMI in the launch config without the ASG being destroyed (which would cycle my instances).

When the lifecycle rule is added to the ASG, both the ASG and the launch config are destroyed and re-created. This cycles my instances, and it happens too quickly for all of our services to initialize and pass health checks.

I'd prefer the former scenario, where the ASG is not cycled. However, with that scenario (which works from an 'apply' perspective), I cannot run 'destroy' without introducing a cycle.
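
A sketch of that documented workaround, under the same naming assumptions as above; min_size, max_size, and the rest of the ASG wiring are illustrative:

    resource "aws_autoscaling_group" "consul_asg" {
      # With create_before_destroy, the replacement ASG is created while the
      # old one still exists, so its name must also be unique per change.
      name                 = "consul-${var.ami}"
      launch_configuration = "${aws_launch_configuration.consul_asg_conf.name}"
      min_size             = 1
      max_size             = 3

      lifecycle {
        create_before_destroy = true
      }
    }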

@gposton changed the title from "Lifecycle rules on launch config" to "Lifecycle rules on launch config create cyclic dependency during 'destroy'" Sep 21, 2015

@dpetzold (Contributor) commented Sep 21, 2015

+1

@rbachman commented Sep 21, 2015

👍

@stack72 (Contributor) commented Sep 21, 2015

@gposton are you giving your LaunchConf a name?

@gposton (Contributor) commented Sep 22, 2015

@stack72 The name includes the AMI ID, so it will be unique each time:

name          = "consul-${var.ami}"
@gposton (Contributor) commented Sep 22, 2015

I updated the issue description... please see the last 4 paragraphs.

@gposton (Contributor) commented Sep 22, 2015

I added a template that demonstrates this issue here: https://gist.github.com/gposton/0b51dea975d9250b6c99

Note that running the template allows you to update the AMI without cycling the instances in the ASG:

export TF_VAR_aws_access_key=YOUR_ACCESS_KEY
export TF_VAR_aws_secret_key=YOUR_SECRET_KEY
export TF_VAR_ami=ami-1627ad26
terraform apply
export TF_VAR_ami=ami-15c29b25
terraform apply

However, 'destroy' introduces a cycle:

aws_security_group.allow_all: Refreshing state... (ID: sg-0d62e969)
aws_route53_zone.ccointernal: Refreshing state... (ID: ZACQKLBGDNTBD)
aws_elb.elb: Refreshing state... (ID: test-elb)
aws_launch_configuration.launch_config: Refreshing state... (ID: test-launch_config-ami-1627ad26)
aws_route53_record.dns: Refreshing state... (ID: ZACQKLBGDNTBD_test.internal.com_CNAME)
aws_autoscaling_group.asg: Refreshing state... (ID: test-asg)
Error creating plan: 1 error(s) occurred:

* Cycle: aws_elb.elb, aws_autoscaling_group.asg, aws_launch_configuration.launch_config (destroy), aws_security_group.allow_all (destroy), aws_security_group.allow_all, aws_launch_configuration.launch_config
@timbunce commented Oct 5, 2015

See also #2359.

@roderickrandolph commented Jan 12, 2016

👍

@sstarcher commented Jan 22, 2016

Anyone have a viable workaround for this?

@vancluever (Contributor) commented Feb 10, 2016

I've been able to reproduce this using a rather complex module-based config. It's hard to paste it all here, but it works just fine during updates and the like (after I enabled create_before_destroy across the entire infrastructure). All I had to do to get it to destroy was roll back my modules to the previous versions, and destroy worked perfectly.

Just going by the documented behaviour, the only one of the three lifecycle config options that would have any value in a destroy operation is prevent_destroy.

When walking dependencies during a terraform destroy operation, would there be a way to override a user-defined create_before_destroy to false?
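
A hypothetical sketch of that last suggestion, done manually in the config rather than inside Terraform's graph walk (resource names reused from the issue description above): flipping the flag back to false before destroying removes the inverted create-before-destroy ordering that produces the cycle.

    resource "aws_launch_configuration" "consul_asg_conf" {
      name          = "consul-${var.ami}"
      image_id      = "${var.ami}"
      instance_type = "t2.micro"

      lifecycle {
        # Kept true for day-to-day applies; manually set to false (or the
        # whole block removed) immediately before running 'terraform destroy'.
        create_before_destroy = false
      }
    }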
