feat: Add global_upgradable variable to support major engine version upgrades to global clusters.

Fix #425. A much more thorough explanation is provided in examples/global-cluster/README.md.

Update documentation via pre-commit for global_upgradable.
theherk committed Mar 3, 2024
1 parent 17ddf72 commit 14ac126
Showing 6 changed files with 181 additions and 19 deletions.
7 changes: 7 additions & 0 deletions CHANGELOG.md
@@ -2,6 +2,13 @@

All notable changes to this project will be documented in this file.

## [9.2.0](https://github.com/terraform-aws-modules/terraform-aws-rds-aurora/compare/v9.1.0...v9.2.0) (2024-03-03)


### Features

* Add `global_upgradable` variable to support major version upgrades to global clusters. ([#425](https://github.com/terraform-aws-modules/terraform-aws-rds-aurora/issues/425))

## [9.1.0](https://github.com/terraform-aws-modules/terraform-aws-rds-aurora/compare/v9.0.2...v9.1.0) (2024-02-16)


2 changes: 2 additions & 0 deletions README.md
@@ -247,6 +247,7 @@ No modules.
| [aws_db_subnet_group.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/db_subnet_group) | resource |
| [aws_iam_role.rds_enhanced_monitoring](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role) | resource |
| [aws_iam_role_policy_attachment.rds_enhanced_monitoring](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/iam_role_policy_attachment) | resource |
| [aws_rds_cluster.global_upgradable](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster) | resource |
| [aws_rds_cluster.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster) | resource |
| [aws_rds_cluster_activity_stream.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster_activity_stream) | resource |
| [aws_rds_cluster_endpoint.this](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster_endpoint) | resource |
@@ -323,6 +324,7 @@ No modules.
| <a name="input_engine_version"></a> [engine\_version](#input\_engine\_version) | The database engine version. Updating this argument results in an outage | `string` | `null` | no |
| <a name="input_final_snapshot_identifier"></a> [final\_snapshot\_identifier](#input\_final\_snapshot\_identifier) | The name of your final DB snapshot when this DB cluster is deleted. If omitted, no final snapshot will be made | `string` | `null` | no |
| <a name="input_global_cluster_identifier"></a> [global\_cluster\_identifier](#input\_global\_cluster\_identifier) | The global cluster identifier specified on `aws_rds_global_cluster` | `string` | `null` | no |
| <a name="input_global_upgradable"></a> [global\_upgradable](#input\_global\_upgradable) | True if `engine_version` should be ignored for the cluster. This is only relevant if you want to be able to upgrade a member cluster of a global cluster. If this is enabled after creation, you'll need to `terraform state mv` to move the resource to this new resource address. | `bool` | `false` | no |
| <a name="input_iam_database_authentication_enabled"></a> [iam\_database\_authentication\_enabled](#input\_iam\_database\_authentication\_enabled) | Specifies whether or not mappings of AWS Identity and Access Management (IAM) accounts to database accounts are enabled | `bool` | `null` | no |
| <a name="input_iam_role_description"></a> [iam\_role\_description](#input\_iam\_role\_description) | Description of the monitoring role | `string` | `null` | no |
| <a name="input_iam_role_force_detach_policies"></a> [iam\_role\_force\_detach\_policies](#input\_iam\_role\_force\_detach\_policies) | Whether to force detaching any policies the monitoring role has before destroying it | `bool` | `null` | no |
31 changes: 31 additions & 0 deletions examples/global-cluster/README.md
@@ -14,6 +14,37 @@ $ terraform apply

Note that this example may create resources which cost money. Run `terraform destroy` when you don't need these resources.

## Upgrading the major version of global clusters

Upgrading the major version of global clusters is possible, but due to a limitation in Terraform, it requires some special consideration. As [documented in the provider](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_global_cluster#upgrading-engine-versions):

> When you upgrade the version of an aws_rds_global_cluster, Terraform will attempt to in-place upgrade the engine versions of all associated clusters. Since the aws_rds_cluster resource is being updated through the aws_rds_global_cluster, you are likely to get an error (Provider produced inconsistent final plan). To avoid this, use the lifecycle ignore_changes meta argument as shown below on the aws_rds_cluster.

In order to accomplish this in a module that is otherwise used for non-global clusters, we must duplicate the cluster resource. The limitation that requires this is that Terraform [lifecycle meta-arguments](https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#literal-values-only) can contain only literal values:

> The lifecycle settings all affect how Terraform constructs and traverses the dependency graph. As a result, only literal values can be used because the processing happens too early for arbitrary expression evaluation.

That means that to ignore the `engine_version` in some cases but not in others, we need another resource. So, if you intend to upgrade your global cluster in the future, you must set the new variable `global_upgradable` to `true`.
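
For illustration, a minimal sketch of a member cluster that opts in to this behavior might look like the following (the module version, engine values, networking inputs, and the `aws_rds_global_cluster.this` reference are assumptions for the sketch, not values taken from this example):

```tf
module "primary_cluster" {
  source  = "terraform-aws-modules/rds-aurora/aws"
  version = "~> 9.2" # first release containing global_upgradable

  name           = "example-global-primary" # illustrative
  engine         = "aurora-postgresql"      # illustrative
  engine_version = "14.9"                   # ignored on this cluster once global_upgradable is true
  instance_class = "db.r6g.large"
  instances      = { one = {} }

  # Join the global cluster and let the global cluster drive engine upgrades
  global_cluster_identifier = aws_rds_global_cluster.this.id
  global_upgradable         = true

  vpc_id               = var.vpc_id               # illustrative
  db_subnet_group_name = var.db_subnet_group_name # illustrative
}
```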

### Migrating the resource

If you already have a global cluster created with this module and would like to make use of this feature, you'll need to move the cluster resource. That can be done with the CLI:

```sh
terraform state mv 'module.this.aws_rds_cluster.this[0]' 'module.this.aws_rds_cluster.global_upgradable[0]'
```

Or via a new [moved block](https://developer.hashicorp.com/terraform/language/modules/develop/refactoring#moved-block-syntax):

```tf
moved {
  from = module.this.aws_rds_cluster.this[0]
  to   = module.this.aws_rds_cluster.global_upgradable[0]
}
```

After that, changing the major version should work without issue.
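
The upgrade itself is then driven from the `aws_rds_global_cluster` resource rather than from the member clusters. As a hedged sketch, assuming an Aurora PostgreSQL global cluster and illustrative version numbers, bumping the version there performs the in-place major upgrade:

```tf
resource "aws_rds_global_cluster" "this" {
  global_cluster_identifier = "example-global-cluster" # illustrative
  engine                    = "aurora-postgresql"      # illustrative
  engine_version            = "15.4"                   # bumped from e.g. "14.9" to trigger the major upgrade
}
```

Because member clusters created with `global_upgradable = true` ignore `engine_version`, the in-place upgrade initiated here should no longer produce the "Provider produced inconsistent final plan" error quoted above; you may also need `allow_major_version_upgrade = true` on the member clusters.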

<!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
## Requirements

130 changes: 123 additions & 7 deletions main.tf
@@ -37,7 +37,7 @@ resource "aws_db_subnet_group" "this" {
################################################################################

resource "aws_rds_cluster" "this" {
  count = local.create ? 1 : 0
  count = local.create && !var.global_upgradable ? 1 : 0

  allocated_storage = var.allocated_storage
  allow_major_version_upgrade = var.allow_major_version_upgrade
@@ -152,6 +152,122 @@
  depends_on = [aws_cloudwatch_log_group.this]
}

resource "aws_rds_cluster" "global_upgradable" {
  count = local.create && var.global_upgradable ? 1 : 0

  allocated_storage = var.allocated_storage
  allow_major_version_upgrade = var.allow_major_version_upgrade
  apply_immediately = var.apply_immediately
  availability_zones = var.availability_zones
  backup_retention_period = var.backup_retention_period
  backtrack_window = local.backtrack_window
  cluster_identifier = var.cluster_use_name_prefix ? null : var.name
  cluster_identifier_prefix = var.cluster_use_name_prefix ? "${var.name}-" : null
  cluster_members = var.cluster_members
  copy_tags_to_snapshot = var.copy_tags_to_snapshot
  database_name = var.is_primary_cluster ? var.database_name : null
  db_cluster_instance_class = var.db_cluster_instance_class
  db_cluster_parameter_group_name = var.create_db_cluster_parameter_group ? aws_rds_cluster_parameter_group.this[0].id : var.db_cluster_parameter_group_name
  db_instance_parameter_group_name = var.allow_major_version_upgrade ? var.db_cluster_db_instance_parameter_group_name : null
  db_subnet_group_name = local.db_subnet_group_name
  delete_automated_backups = var.delete_automated_backups
  deletion_protection = var.deletion_protection
  enable_global_write_forwarding = var.enable_global_write_forwarding
  enabled_cloudwatch_logs_exports = var.enabled_cloudwatch_logs_exports
  enable_http_endpoint = var.enable_http_endpoint
  engine = var.engine
  engine_mode = var.engine_mode
  engine_version = var.engine_version
  final_snapshot_identifier = var.final_snapshot_identifier
  global_cluster_identifier = var.global_cluster_identifier
  iam_database_authentication_enabled = var.iam_database_authentication_enabled
  # iam_roles has been removed from this resource and instead will be used with aws_rds_cluster_role_association below to avoid conflicts per docs
  iops = var.iops
  kms_key_id = var.kms_key_id
  manage_master_user_password = var.global_cluster_identifier == null && var.manage_master_user_password ? var.manage_master_user_password : null
  master_user_secret_kms_key_id = var.global_cluster_identifier == null && var.manage_master_user_password ? var.master_user_secret_kms_key_id : null
  master_password = var.is_primary_cluster && !var.manage_master_user_password ? var.master_password : null
  master_username = var.is_primary_cluster ? var.master_username : null
  network_type = var.network_type
  port = local.port
  preferred_backup_window = local.is_serverless ? null : var.preferred_backup_window
  preferred_maintenance_window = local.is_serverless ? null : var.preferred_maintenance_window
  replication_source_identifier = var.replication_source_identifier

  dynamic "restore_to_point_in_time" {
    for_each = length(var.restore_to_point_in_time) > 0 ? [var.restore_to_point_in_time] : []

    content {
      restore_to_time = try(restore_to_point_in_time.value.restore_to_time, null)
      restore_type = try(restore_to_point_in_time.value.restore_type, null)
      source_cluster_identifier = restore_to_point_in_time.value.source_cluster_identifier
      use_latest_restorable_time = try(restore_to_point_in_time.value.use_latest_restorable_time, null)
    }
  }

  dynamic "s3_import" {
    for_each = length(var.s3_import) > 0 && !local.is_serverless ? [var.s3_import] : []

    content {
      bucket_name = s3_import.value.bucket_name
      bucket_prefix = try(s3_import.value.bucket_prefix, null)
      ingestion_role = s3_import.value.ingestion_role
      source_engine = "mysql"
      source_engine_version = s3_import.value.source_engine_version
    }
  }

  dynamic "scaling_configuration" {
    for_each = length(var.scaling_configuration) > 0 && local.is_serverless ? [var.scaling_configuration] : []

    content {
      auto_pause = try(scaling_configuration.value.auto_pause, null)
      max_capacity = try(scaling_configuration.value.max_capacity, null)
      min_capacity = try(scaling_configuration.value.min_capacity, null)
      seconds_until_auto_pause = try(scaling_configuration.value.seconds_until_auto_pause, null)
      timeout_action = try(scaling_configuration.value.timeout_action, null)
    }
  }

  dynamic "serverlessv2_scaling_configuration" {
    for_each = length(var.serverlessv2_scaling_configuration) > 0 && var.engine_mode == "provisioned" ? [var.serverlessv2_scaling_configuration] : []

    content {
      max_capacity = serverlessv2_scaling_configuration.value.max_capacity
      min_capacity = serverlessv2_scaling_configuration.value.min_capacity
    }
  }

  skip_final_snapshot = var.skip_final_snapshot
  snapshot_identifier = var.snapshot_identifier
  source_region = var.source_region
  storage_encrypted = var.storage_encrypted
  storage_type = var.storage_type
  tags = merge(var.tags, var.cluster_tags)
  vpc_security_group_ids = compact(concat([try(aws_security_group.this[0].id, "")], var.vpc_security_group_ids))

  timeouts {
    create = try(var.cluster_timeouts.create, null)
    update = try(var.cluster_timeouts.update, null)
    delete = try(var.cluster_timeouts.delete, null)
  }

  lifecycle {
    ignore_changes = [
      # See https://github.com/terraform-aws-modules/terraform-aws-rds-aurora/issues/425
      engine_version,
      # See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_cluster#replication_source_identifier
      # Since this is used either in read-replica clusters or global clusters, this should be acceptable to specify
      replication_source_identifier,
      # See docs here https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/rds_global_cluster#new-global-cluster-from-existing-db-cluster
      global_cluster_identifier,
      snapshot_identifier,
    ]
  }

  depends_on = [aws_cloudwatch_log_group.this]
}

################################################################################
# Cluster Instance(s)
################################################################################
@@ -163,7 +279,7 @@ resource "aws_rds_cluster_instance" "this" {
  auto_minor_version_upgrade = try(each.value.auto_minor_version_upgrade, var.auto_minor_version_upgrade)
  availability_zone = try(each.value.availability_zone, null)
  ca_cert_identifier = var.ca_cert_identifier
  cluster_identifier = aws_rds_cluster.this[0].id
  cluster_identifier = var.global_upgradable ? aws_rds_cluster.global_upgradable[0].id : aws_rds_cluster.this[0].id
  copy_tags_to_snapshot = try(each.value.copy_tags_to_snapshot, var.copy_tags_to_snapshot)
  db_parameter_group_name = var.create_db_parameter_group ? aws_db_parameter_group.this[0].id : try(each.value.db_parameter_group_name, var.db_parameter_group_name)
  db_subnet_group_name = local.db_subnet_group_name
@@ -198,7 +314,7 @@ resource "aws_rds_cluster_endpoint" "this" {
  for_each = { for k, v in var.endpoints : k => v if local.create && !local.is_serverless }

  cluster_endpoint_identifier = each.value.identifier
  cluster_identifier = aws_rds_cluster.this[0].id
  cluster_identifier = var.global_upgradable ? aws_rds_cluster.global_upgradable[0].id : aws_rds_cluster.this[0].id
  custom_endpoint_type = each.value.type
  excluded_members = try(each.value.excluded_members, null)
  static_members = try(each.value.static_members, null)
@@ -216,7 +332,7 @@
resource "aws_rds_cluster_role_association" "this" {
  for_each = { for k, v in var.iam_roles : k => v if local.create }

  db_cluster_identifier = aws_rds_cluster.this[0].id
  db_cluster_identifier = var.global_upgradable ? aws_rds_cluster.global_upgradable[0].id : aws_rds_cluster.this[0].id
  feature_name = each.value.feature_name
  role_arn = each.value.role_arn
}
@@ -275,7 +391,7 @@ resource "aws_appautoscaling_target" "this" {

  max_capacity = var.autoscaling_max_capacity
  min_capacity = var.autoscaling_min_capacity
  resource_id = "cluster:${aws_rds_cluster.this[0].cluster_identifier}"
  resource_id = "cluster:${var.global_upgradable ? aws_rds_cluster.global_upgradable[0].cluster_identifier : aws_rds_cluster.this[0].cluster_identifier}"
  scalable_dimension = "rds:cluster:ReadReplicaCount"
  service_namespace = "rds"

@@ -293,7 +409,7 @@ resource "aws_appautoscaling_policy" "this" {

  name = var.autoscaling_policy_name
  policy_type = "TargetTrackingScaling"
  resource_id = "cluster:${aws_rds_cluster.this[0].cluster_identifier}"
  resource_id = "cluster:${var.global_upgradable ? aws_rds_cluster.global_upgradable[0].cluster_identifier : aws_rds_cluster.this[0].cluster_identifier}"
  scalable_dimension = "rds:cluster:ReadReplicaCount"
  service_namespace = "rds"

@@ -429,7 +545,7 @@ resource "aws_cloudwatch_log_group" "this" {
resource "aws_rds_cluster_activity_stream" "this" {
  count = local.create && var.create_db_cluster_activity_stream ? 1 : 0

  resource_arn = aws_rds_cluster.this[0].arn
  resource_arn = var.global_upgradable ? aws_rds_cluster.global_upgradable[0].arn : aws_rds_cluster.this[0].arn
  mode = var.db_cluster_activity_stream_mode
  kms_key_id = var.db_cluster_activity_stream_kms_key_id
  engine_native_audit_fields_included = var.engine_native_audit_fields_included
24 changes: 12 additions & 12 deletions outputs.tf
@@ -13,37 +13,37 @@ output "db_subnet_group_name" {

output "cluster_arn" {
  description = "Amazon Resource Name (ARN) of cluster"
  value = try(aws_rds_cluster.this[0].arn, null)
  value = try(aws_rds_cluster.this[0].arn, try(aws_rds_cluster.global_upgradable[0].arn, null))
}

output "cluster_id" {
  description = "The RDS Cluster Identifier"
  value = try(aws_rds_cluster.this[0].id, null)
  value = try(aws_rds_cluster.this[0].id, try(aws_rds_cluster.global_upgradable[0].id, null))
}

output "cluster_resource_id" {
  description = "The RDS Cluster Resource ID"
  value = try(aws_rds_cluster.this[0].cluster_resource_id, null)
  value = try(aws_rds_cluster.this[0].cluster_resource_id, try(aws_rds_cluster.global_upgradable[0].cluster_resource_id, null))
}

output "cluster_members" {
  description = "List of RDS Instances that are a part of this cluster"
  value = try(aws_rds_cluster.this[0].cluster_members, null)
  value = try(aws_rds_cluster.this[0].cluster_members, try(aws_rds_cluster.global_upgradable[0].cluster_members, null))
}

output "cluster_endpoint" {
  description = "Writer endpoint for the cluster"
  value = try(aws_rds_cluster.this[0].endpoint, null)
  value = try(aws_rds_cluster.this[0].endpoint, try(aws_rds_cluster.global_upgradable[0].endpoint, null))
}

output "cluster_reader_endpoint" {
  description = "A read-only endpoint for the cluster, automatically load-balanced across replicas"
  value = try(aws_rds_cluster.this[0].reader_endpoint, null)
  value = try(aws_rds_cluster.this[0].reader_endpoint, try(aws_rds_cluster.global_upgradable[0].reader_endpoint, null))
}

output "cluster_engine_version_actual" {
  description = "The running version of the cluster database"
  value = try(aws_rds_cluster.this[0].engine_version_actual, null)
  value = try(aws_rds_cluster.this[0].engine_version_actual, try(aws_rds_cluster.global_upgradable[0].engine_version_actual, null))
}

# database_name is not set on `aws_rds_cluster` resource if it was not specified, so can't be used in output
@@ -54,29 +54,29 @@ output "cluster_database_name" {

output "cluster_port" {
  description = "The database port"
  value = try(aws_rds_cluster.this[0].port, null)
  value = try(aws_rds_cluster.this[0].port, try(aws_rds_cluster.global_upgradable[0].port, null))
}

output "cluster_master_password" {
  description = "The database master password"
  value = try(aws_rds_cluster.this[0].master_password, null)
  value = try(aws_rds_cluster.this[0].master_password, try(aws_rds_cluster.global_upgradable[0].master_password, null))
  sensitive = true
}

output "cluster_master_username" {
  description = "The database master username"
  value = try(aws_rds_cluster.this[0].master_username, null)
  value = try(aws_rds_cluster.this[0].master_username, try(aws_rds_cluster.global_upgradable[0].master_username, null))
  sensitive = true
}

output "cluster_master_user_secret" {
  description = "The generated database master user secret when `manage_master_user_password` is set to `true`"
  value = try(aws_rds_cluster.this[0].master_user_secret, null)
  value = try(aws_rds_cluster.this[0].master_user_secret, try(aws_rds_cluster.global_upgradable[0].master_user_secret, null))
}

output "cluster_hosted_zone_id" {
  description = "The Route53 Hosted Zone ID of the endpoint"
  value = try(aws_rds_cluster.this[0].hosted_zone_id, null)
  value = try(aws_rds_cluster.this[0].hosted_zone_id, try(aws_rds_cluster.global_upgradable[0].hosted_zone_id, null))
}

################################################################################
