reopen #192 #213
Conversation
These changes were released in v1.10.0.
/terratest

Force-pushed from a8f8eaa to f16cb42
Hi @osterman, I found the issue and did another push. The random_pet did not handle the enabled == false condition.
/terratest

/terratest
It looks like the tests are now failing on disabled (enabled = false) should not create any resources. I believe this should be fixed by:

```hcl
resource "random_pet" "..." {
  count = local.enabled ? 1 : 0
  ...
}
```

Note that you then need to update references to the random pet to use an array, such as one(random_pet.instance[*].keepers.instance_class).
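Put together, the suggested pattern might look like the sketch below. This is illustrative only: the resource name `instance` and the `keepers` content follow the comment above, and `var.instance_class` is an assumed input, not necessarily the module's actual variable.

```hcl
# Gate the resource on the module-level enabled flag: with count, the
# resource becomes a list of zero or one instances.
resource "random_pet" "instance" {
  count = local.enabled ? 1 : 0

  keepers = {
    # var.instance_class is an assumed input variable for this sketch
    instance_class = var.instance_class
  }
}

locals {
  # one() returns the single element when enabled, or null when the
  # splat expression yields an empty list (enabled = false).
  instance_class = one(random_pet.instance[*].keepers.instance_class)
}
```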
Hi @Benbentwo, that makes sense and is better than the hack I had in there. I just pushed a fix for this.
@Benbentwo @osterman this workflow is awaiting approval: https://github.com/cloudposse/terraform-aws-rds-cluster/actions/runs/9372514860
/terratest

/terratest
Hey everyone, with the upgrade to 1.10.0, Terraform would destroy and recreate all instances for all our clusters. Is this intentional? I understand this PR will prevent outages since the instance now has create_before_destroy.
Based on @finchr's comment in #192 (comment), it should only replace the DB instances. I ran this update on one of our test envs.

It looked a bit scary, as the AWS console stated it was deleting the "Writer Instance" without having promoted the new one to Writer first, so it appears this could potentially cause a brief downtime (with regard to writes).
@syphernl @Benbentwo agree; while this isn't a breaking change, some sort of heads-up would have been welcome. But running this fully hands-off in our dev environment, we did not see an interruption in our application's connection to the DB. So good work @finchr!
Hi @morremeyer, the intent was to be able to update a cluster with near-zero downtime. We ran into several scenarios where Terraform was doing that anyway, without the benefit of create_before_destroy. Plus, this is a lot faster than in-place updates of existing nodes.
what
I implemented create_before_destroy on the aws_rds_cluster_instance default instances.
Originally in #192 but that was closed for reasons we won't go into here.
why
Making a change to any parameter that triggers a replace on an aws_rds_cluster_instance results in all instances being destroyed before a new instance is created, which causes an outage. This is a faster (and safer) alternative to #191.
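The fix can be sketched roughly as below. This is an illustrative fragment, not the module's actual code: the resource name `default` matches the description above, but the arguments and variables shown (`var.cluster_size`, `var.instance_class`, `var.engine`) are assumptions.

```hcl
resource "aws_rds_cluster_instance" "default" {
  count              = local.enabled ? var.cluster_size : 0
  cluster_identifier = aws_rds_cluster.default[0].id
  instance_class     = var.instance_class
  engine             = var.engine

  lifecycle {
    # Create the replacement instance before destroying the old one,
    # so the cluster keeps serving traffic during a replace instead
    # of all instances being destroyed first.
    create_before_destroy = true
  }
}
```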
references
This closes #190 and is an alternative to #191