RDS instance: Terraform insisting on destroy-recreate of RDS instances although their IDs are unchanged #16724
Comments
Hi @dohoangkhiem, can you share the config you have that is causing this?
You're right! I also found that … and here we have … So, it seems Terraform really considers no … Do you think no …?
I have a feeling that the aws provider doesn't have a default value for … However, in this case I still think the issue is that …
@jbardin thanks for your response. Below is our configuration for the …
where …
While I agree that unset and "" would show up as a diff, I guess Terraform does not base this only on the configuration diff, but also on the actual calculated value, right? In terms of value, unset and "" for the AZ should be the same, I think.
To make our case a little clearer: we already had the expression for …
When we checked with our old implementation on Terraform 0.9.8, the above expression didn't cause any changes in the Terraform plan. Then we upgraded to Terraform 0.10.7 to leverage workspaces, and now, with the same empty value of …
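For illustration only (the commenter's actual expression was not preserved in this thread, and the variable names below are assumptions), an interpolation of this shape in 0.10/0.11-era HCL evaluates to an empty string rather than leaving availability_zone unset, which is the kind of value being discussed here:

```hcl
# Hypothetical sketch, not the reporter's exact configuration.
resource "aws_db_instance" "example" {
  identifier        = "example-db"
  engine            = "mysql"
  instance_class    = "db.t2.micro"
  allocated_storage = 20
  username          = "admin"
  password          = "${var.db_password}"   # assumed variable
  multi_az          = "${var.multi_az}"      # assumed variable

  # When var.multi_az is "true" this evaluates to "" (an empty string),
  # which is not the same thing as omitting the argument entirely.
  availability_zone = "${var.multi_az == "true" ? "" : var.az}"
}
```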
I ran into this problem today where Terraform kept wanting to recreate the RDS cluster because the id value was generated. Upgrading to 0.11.0 seems to have helped.
@brandon-dacrib the current AWS provider version is 1.3.1; did you force the use of v0.1.4 to get around this issue? 0.1.4 seems... ancient. https://github.com/terraform-providers/terraform-provider-aws/releases
No. That is what I had installed on my laptop. How do I force an upgrade?
I've had this issue as well, in 0.10.8, 0.11.0, and now 0.11.1, with AWS provider versions 1.2.0 through 1.5.0. The problem is that Terraform seems to get confused about the AZs the instances are using. Just now, it said it spun up databases in …
+1, facing the same issue: Terraform keeps wanting to re-create the RDS instance.

$ terraform version
Terraform v0.11.1

$ terraform providers
.
├── provider.aws >= 1.5.0
├── provider.terraform
+1 facing the same issue, also seems to be AZ related.
RDS Aurora seems to use all availability zones even if fewer are requested. This causes the state file to be out of sync and forces a new resource. For example, with only two availability zones requested, apply works as expected, but after no changes, another apply reports a diff on availability_zones and forces a new resource. If the script is changed so that all of the region's availability zones are listed, then no action is required on apply.
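The original snippets were lost when this issue was migrated; as a sketch of the pattern described above (cluster name, credentials, and AZ names are assumptions), the two cases would look roughly like this:

```hcl
# Requesting only two AZs: after the first apply, Aurora reports a third AZ
# in state, so the next plan shows a diff on availability_zones and forces
# a new resource.
resource "aws_rds_cluster" "example" {
  cluster_identifier = "example-aurora"
  master_username    = "admin"               # assumed
  master_password    = "${var.db_password}"  # assumed variable
  availability_zones = ["eu-west-1a", "eu-west-1b"]
}

# Listing every AZ in the region instead keeps re-applies clean:
#   availability_zones = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
```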
Are there any updates on this? The comment by @MMarulla reflects exactly the issue I have. I also think RDS Aurora uses all availability zones by default, and it ignores whatever is set. In my configuration I have:

availability_zones = ["eu-west-1b"]

And Terraform keeps outputting every single time:

availability_zones.#:          "3" => "1" (forces new resource)
availability_zones.1924028850: "eu-west-1b" => "eu-west-1b"
availability_zones.3953592328: "eu-west-1a" => "" (forces new resource)
availability_zones.94988580:   "eu-west-1c" => "" (forces new resource)

Is there a temporary workaround to this, if not a fix? I only need to use one availability zone. Thank you.
Here is what I ended up doing... First, we set up as many subnets as there are AZs available in the region, but no more than 4. In this example, the provider sets the region based on a variable from the tfvars file, and we set up (up to) four /27 subnets within a /24 CIDR block. The /24 block to use is also set in a variable in the tfvars file.
Then, we set up the DB subnet group to use however many subnets were created.
Finally, in the aws_rds_cluster setup, we specify the db_subnet_group, but do not specify any availability zones at all. AWS spreads the Aurora assets across the subnets as it sees fit. This always works on re-apply and works in different regions with different numbers of availability zones.
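A sketch of that pattern in 0.10/0.11-era syntax, assuming hypothetical variable and resource names (var.vpc_id, var.rds_cidr_block, var.db_password) that are not from the original comment:

```hcl
data "aws_availability_zones" "available" {}

# Up to four /27 subnets carved out of the /24 block supplied via tfvars,
# one per availability zone in the current region.
resource "aws_subnet" "rds" {
  count             = "${min(length(data.aws_availability_zones.available.names), 4)}"
  vpc_id            = "${var.vpc_id}"
  cidr_block        = "${cidrsubnet(var.rds_cidr_block, 3, count.index)}"
  availability_zone = "${element(data.aws_availability_zones.available.names, count.index)}"
}

# The DB subnet group spans however many subnets were created.
resource "aws_db_subnet_group" "rds" {
  name       = "rds-subnet-group"
  subnet_ids = ["${aws_subnet.rds.*.id}"]
}

# The cluster gets the subnet group but no availability_zones argument,
# so AWS spreads the Aurora assets across the subnets as it sees fit and
# re-applies stay clean.
resource "aws_rds_cluster" "aurora" {
  cluster_identifier   = "example-aurora"
  master_username      = "admin"               # assumed
  master_password      = "${var.db_password}"  # assumed variable
  db_subnet_group_name = "${aws_db_subnet_group.rds.name}"
}
```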
We were observing changed "id" and "availability_zones" fields according to …
This issue has been automatically migrated to hashicorp/terraform-provider-aws#9760 because it looks like an issue with that provider. If you believe this is not an issue with the provider, please reply to hashicorp/terraform-provider-aws#9760.
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further. |
We inspected our plan and realized Terraform keeps wanting to replace our RDS instances although their name and AZ have not changed; see the plan lines quoted below.
The line

identifier: "group1-site1-live-5-5-6-public1" => "group1-site1-live-5-5-6-public1"

shows that the ID does not change, and if we apply this it turns out that even the availability zone does not change. But the Terraform plan reports

id: "group1-site1-live-5-5-6-public1" => <computed> (forces new resource)

which makes me confused about the difference between id and identifier. How/why could this plan happen? We have another RDS instance which is almost the same as the one above, but it is not in a list (so count = 1), and the plan does not touch that instance (which is the correct behaviour). Is this a bug?
Note that these existing RDS instances were imported into the state manually, like the following:
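(The exact command was not preserved in this thread; for illustration only, with the resource address aws_db_instance.public being an assumption and the identifier taken from the plan above, a manual import of one instance in a counted resource looks roughly like this.)

```
$ terraform import 'aws_db_instance.public[0]' group1-site1-live-5-5-6-public1
```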
The Terraform version is 0.10.7; we tried 0.10.8 and 0.11 and it is still the same plan.