[Bug]: aws_db_instance ca_cert_identifier field applied but not reflected #33546
Comments
Also occurs with 5.17.
Rebooting isn't the same thing as having maintenance run. If you want the cert update to take effect before the next RDS maintenance cycle, you must apply the pending maintenance action yourself.
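For reference, applying a pending maintenance action from the AWS CLI looks roughly like the following; the instance ARN is a placeholder, and `ca-certificate-rotation` is the apply-action value relevant to this thread:

```shell
# Sketch only: apply the pending CA rotation immediately instead of
# waiting for the next maintenance window.
# Replace the ARN placeholder with your instance's ARN.
aws rds apply-pending-maintenance-action \
  --resource-identifier arn:aws:rds:us-east-1:123456789012:db:mydb \
  --apply-action ca-certificate-rotation \
  --opt-in-type immediate
```

You can list what is pending first with `aws rds describe-pending-maintenance-actions`.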
Same problem, but I can make it work by applying the same Terraform configuration twice.
This is a bug because the CA should be set on the first run.
For the documentation, it would be supremely helpful if the arguments indicated which are applied immediately regardless of the apply_immediately setting, and which are queued up to take effect during the next maintenance window unless apply_immediately is true. Though this is likely available in the AWS docs... (confirmed)

For this bug itself, it also affects Terraform core 1.2.2 and AWS provider 5.18.1.

NOTICE: For anyone deploying RDS instances with blue_green_update enabled, with read replicas, and who needs to update away from rds-ca-2019: AWS will use the default certificate when deploying the new replicas during a blue-green update. This means that rds-ca-2019 will come back after such an update.
Note that the modify-certificates doc states that this update is temporary, but it appears that "temporary" means until the certificate expires, which for rsa2048 looks like this:
This is likely to become a more pressing matter as AWS approaches its cert transition:
It looks like reminders are going out monthly, with the most recent one hitting affected accounts last Friday (4 days ago).
I'm facing the same issue.
Is this still a bug, or can we use the workaround described above?
This is still an issue. I have multiple DocumentDB instances running with rds-ca-2019, and updating them to rds-ca-rsa2048-g1 via terraform apply has no effect. The instances keep the old certificate, and there is no planned maintenance visible ("No Pending maintenance found.").
This is still an issue for me as well. I believe this is a bug, as changing ca_cert_identifier in the configuration does not take effect on apply.
So what's the best way to update the certificate on RDS in this situation? I mean, if we change the certificate with the CLI or from the AWS console itself, will Terraform see the change without breaking the pipeline? This seems a bit critical, also because I suppose the certificate update will bring some DB downtime.
Hello. From what I have seen, when there is a change in ca_cert_identifier, the change is queued as a pending maintenance action rather than applied right away. To update, you have to apply the pending maintenance action.
Regarding downtime, it depends on the engine and engine version. You can use the CLI command below with the desired engine and engine version and check SupportsCertificateRotationWithoutRestart. If SupportsCertificateRotationWithoutRestart is true, there will be no downtime.
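The check described here can be done with something like the following; the engine and version are example values, and the query path matches the shape of the DescribeDBEngineVersions API response:

```shell
# Sketch only: check whether this engine/version supports CA rotation
# without a restart. "postgres" / "15.4" are example values;
# substitute your own engine and engine version.
aws rds describe-db-engine-versions \
  --engine postgres \
  --engine-version 15.4 \
  --query 'DBEngineVersions[].SupportsCertificateRotationWithoutRestart'
```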
From then on, the Terraform state will not have any issues and will match the actual certificate. Thank you!
We are also experiencing a similar issue when setting up DocumentDB from scratch. No matter what we set ca_cert_identifier to, the instances come up with the default certificate. Using apply_immediately did not help either.

EDIT: Is it possible to automate the above procedure mentioned by @Kammula280, applying pending changes with Terraform to remove the manual work?
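One hypothetical way to automate this from Terraform itself is a local-exec provisioner that runs the AWS CLI after the instance changes. This is a sketch only, not an officially supported workaround; it assumes the AWS CLI is available and credentialed wherever Terraform runs, and all resource names are illustrative:

```hcl
# Hypothetical sketch: re-run whenever ca_cert_identifier changes and
# immediately apply the pending CA rotation via the AWS CLI.
# Requires Terraform >= 1.4 (terraform_data) and the AWS CLI on PATH.
resource "terraform_data" "apply_ca_rotation" {
  triggers_replace = [aws_db_instance.db.ca_cert_identifier]

  provisioner "local-exec" {
    command = "aws rds apply-pending-maintenance-action --resource-identifier ${aws_db_instance.db.arn} --apply-action ca-certificate-rotation --opt-in-type immediate"
  }
}
```

Note the usual caveats with provisioners: they are a last resort, are not reflected in plan output, and failures surface only at apply time.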
Terraform Core Version
1.5.7
AWS Provider Version
5.14.0
Affected Resource(s)
aws_db_instance
Expected Behavior
RDS AWS instance should have rds-ca-rsa2048-g1 CA.
Actual Behavior
RDS AWS instance remains on rds-ca-2019.
Relevant Error/Panic Output Snippet
Terraform Configuration Files
Steps to Reproduce
For a database with current ca_cert_identifier "rds-ca-2019", update to "rds-ca-rsa2048-g1".
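The report left the configuration section empty; a minimal configuration matching the reproduction step would look something like this (all identifiers, engine settings, and credentials are illustrative):

```hcl
# Illustrative repro config: change ca_cert_identifier on an existing
# instance from "rds-ca-2019" to "rds-ca-rsa2048-g1". Per this thread,
# the change is queued as pending maintenance unless applied immediately.
resource "aws_db_instance" "db" {
  identifier          = "example-db"
  engine              = "postgres"
  engine_version      = "15.4"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "dbadmin"
  password            = "change-me" # use a secret in real code
  skip_final_snapshot = true

  ca_cert_identifier = "rds-ca-rsa2048-g1"
  apply_immediately  = true
}
```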
Debug Output
build 20-Sep-2023 10:59:14 aws_s3_bucket_policy.flow_logs_policy: Modifying... [id=3556xxx-vpc-xxx]
build 20-Sep-2023 10:59:14 module.rds.aws_db_instance.db: Modifying... [id=db-YMBM3QDXXXX]
build 20-Sep-2023 10:59:14 aws_s3_bucket_policy.flow_logs_policy: Modifications complete after 0s [id=3556xxx-vpc-0ebb17caaf2062b8c-flow-log-kerr-mg2-dev]
build 20-Sep-2023 10:59:24 module.rds.aws_db_instance.db: Still modifying... [id=db-YMBM3QDXXXX, 10s elapsed]
build 20-Sep-2023 10:59:34 module.rds.aws_db_instance.db: Still modifying... [id=db-YMBM3QDXXXX, 20s elapsed]
build 20-Sep-2023 10:59:44 module.rds.aws_db_instance.db: Still modifying... [id=db-YMBM3QDXXXX, 30s elapsed]
build 20-Sep-2023 10:59:54 module.rds.aws_db_instance.db: Still modifying... [id=db-YMBM3QDXXXX, 40s elapsed]
build 20-Sep-2023 11:00:04 module.rds.aws_db_instance.db: Still modifying... [id=db-YMBM3QDXXXX, 50s elapsed]
build 20-Sep-2023 11:00:14 module.rds.aws_db_instance.db: Still modifying... [id=db-YMBM3QDXXXX, 1m0s elapsed]
build 20-Sep-2023 11:00:24 module.rds.aws_db_instance.db: Still modifying... [id=db-YMBM3QDXXXX, 1m10s elapsed]
build 20-Sep-2023 11:00:34 module.rds.aws_db_instance.db: Still modifying... [id=db-YMBM3QDXXXX, 1m20s elapsed]
build 20-Sep-2023 11:00:35 module.rds.aws_db_instance.db: Modifications complete after 1m21s [id=db-YMBM3QDXXXX]
Panic Output
There is no panic.
Important Factoids
No response
References
No response
Would you like to implement a fix?
No