
ERROR when creating an Aurora RDS global cluster and snapshot_identifier is defined #10965

Open
jamengual opened this issue Nov 21, 2019 · 1 comment


@jamengual jamengual commented Nov 21, 2019

Hi.

I have been working with Terraform to create RDS global clusters without many issues until now.

I'm using the same code I use to create my prod global cluster to create another cluster based on the original prod cluster's snapshot. When snapshot_identifier is provided, the cluster gets created as a regional cluster and is not attached to the newly created global cluster, BUT if I use exactly the same code without specifying snapshot_identifier, the global cluster is created and the new regional RDS cluster gets attached immediately to the global cluster.

Exactly the same behavior happens when using the console, but in the console I can successfully create the global cluster from the snapshot.

Keep in mind that I replaced some text to hide personal information.

The sample code:

# Global mydata RDS cluster

resource "aws_rds_global_cluster" "mydata_clone" {
  count                     = var.create_clone ? 1 : 0
  engine_version            = "5.6.10a"
  global_cluster_identifier = "clone-test-mydata-global"
  storage_encrypted         = true
  deletion_protection       = false
  provider                  = aws.primary
}


module "test_mydata_us_east_2_clone_cluster" {
  source         = "git::https://github.com/cloudposse/terraform-aws-rds-cluster.git?ref=0.17.0"
  enabled        = var.create_clone
  engine         = "aurora"
  engine_version = "5.6.10a"
  cluster_family = "aurora5.6"
  cluster_size   = 1
  namespace      = var.namespace
  stage          = var.environment
  name           = "us-east-2-${var.mydata_name}-clone"
#   admin_user     = var.mydata_db_user
#   admin_password = random_string.db_password.result
  db_name        = var.mydata_db_name
  instance_type  = "db.r5.2xlarge"
  vpc_id         = local.vpc_id
  security_groups = [
    local.sg-web-server-us-east-2,
    local.sg-app-scan-us-east-2,
    local.sg-app-scan-us-east-2,
    local.sg-app-scan-us-east-2
  ]
  allowed_cidr_blocks                 = var.mydata_allowed_cidr_blocks
  subnets                             = local.private_subnet_ids
  engine_mode                         = "global"
  global_cluster_identifier           = join("", aws_rds_global_cluster.mydata_clone.*.id)
  iam_database_authentication_enabled = true
  storage_encrypted                   = true
  deletion_protection                 = false
  iam_roles                           = [aws_iam_role.AuroraAccessToDataBuckets.arn]
  ##enabled_cloudwatch_logs_exports     = ["audit", "error", "general", "slowquery"]
  tags                                = local.complete_tags
  snapshot_identifier                 = var.snapshot_identifier
  skip_final_snapshot                 = true

  # DNS setting
  cluster_dns_name = "test-${var.environment}-mydata-writer-clone-us-east-2"
  reader_dns_name  = "test-${var.environment}-mydata-reader-clone-us-east-2"
  zone_id          = data.aws_route53_zone.ds_example_com.zone_id

  # enable monitoring every 30 seconds
  ##rds_monitoring_interval = 15

  # reference iam role created above
  ##rds_monitoring_role_arn      = aws_iam_role.mydata_enhanced_monitoring.arn
  ##performance_insights_enabled = true

  cluster_parameters = [
    {
      name         = "binlog_format"
      value        = "row"
      apply_method = "pending-reboot"
    },
    {
      apply_method = "immediate"
      name         = "max_allowed_packet"
      value        = "16777216"
    },
    {
      apply_method = "pending-reboot"
      name         = "performance_schema"
      value        = "1"
    },
    {
      apply_method = "immediate"
      name         = "server_audit_logging"
      value        = "0"
    }
  ]
  providers = {
    aws = aws.primary
  }
}

Plan output:

    + resource "aws_rds_global_cluster" "mydata_clone" {
        + arn                        = (known after apply)
        + deletion_protection        = false
        + engine                     = "aurora"
        + engine_version             = "5.6.10a"
        + global_cluster_identifier  = "clone-test-mydata-global"
        + global_cluster_resource_id = (known after apply)
        + id                         = (known after apply)
        + storage_encrypted          = true
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_db_parameter_group.default[0] will be created
    + resource "aws_db_parameter_group" "default" {
        + arn         = (known after apply)
        + description = "DB instance parameter group"
        + family      = "aurora5.6"
        + id          = (known after apply)
        + name        = "test-staging-us-east-2-mydata-clone"
        + name_prefix = (known after apply)
        + tags        = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_db_subnet_group.default[0] will be created
    + resource "aws_db_subnet_group" "default" {
        + arn         = (known after apply)
        + description = "Allowed subnets for DB cluster instances"
        + id          = (known after apply)
        + name        = "test-staging-us-east-2-mydata-clone"
        + name_prefix = (known after apply)
        + subnet_ids  = [
            + "subnet-1111111111111",
            + "subnet-1111111111111",
            + "subnet-1111111111111",
          ]
        + tags        = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_rds_cluster.default[0] will be created
    + resource "aws_rds_cluster" "default" {
        + apply_immediately                   = true
        + arn                                 = (known after apply)
        + availability_zones                  = (known after apply)
        + backup_retention_period             = 5
        + cluster_identifier                  = "test-staging-us-east-2-mydata-clone"
        + cluster_identifier_prefix           = (known after apply)
        + cluster_members                     = (known after apply)
        + cluster_resource_id                 = (known after apply)
        + copy_tags_to_snapshot               = false
        + database_name                       = "testdb"
        + db_cluster_parameter_group_name     = "test-staging-us-east-2-mydata-clone"
        + db_subnet_group_name                = "test-staging-us-east-2-mydata-clone"
        + deletion_protection                 = false
        + enabled_cloudwatch_logs_exports     = []
        + endpoint                            = (known after apply)
        + engine                              = "aurora"
        + engine_mode                         = "global"
        + engine_version                      = "5.6.10a"
        + final_snapshot_identifier           = "test-staging-us-east-2-mydata-clone"
        + global_cluster_identifier           = (known after apply)
        + hosted_zone_id                      = (known after apply)
        + iam_database_authentication_enabled = true
        + iam_roles                           = [
            + "arn:aws:iam::1111111111:role/AuroraAccessToDataBuckets",
          ]
        + id                                  = (known after apply)
        + kms_key_id                          = (known after apply)
        + master_username                     = "admin"
        + port                                = (known after apply)
        + preferred_backup_window             = "07:00-09:00"
        + preferred_maintenance_window        = "wed:03:00-wed:04:00"
        + reader_endpoint                     = (known after apply)
        + skip_final_snapshot                 = true
        + snapshot_identifier                 = "snapshot-prep-for-data-load"
        + storage_encrypted                   = true
        + tags                                = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
        + vpc_security_group_ids              = (known after apply)
      }
  
    # module.test_mydata_us_east_2_clone_cluster.aws_rds_cluster_instance.default[0] will be created
    + resource "aws_rds_cluster_instance" "default" {
        + apply_immediately               = (known after apply)
        + arn                             = (known after apply)
        + auto_minor_version_upgrade      = true
        + availability_zone               = (known after apply)
        + cluster_identifier              = (known after apply)
        + copy_tags_to_snapshot           = false
        + db_parameter_group_name         = "test-staging-us-east-2-mydata-clone"
        + db_subnet_group_name            = "test-staging-us-east-2-mydata-clone"
        + dbi_resource_id                 = (known after apply)
        + endpoint                        = (known after apply)
        + engine                          = "aurora"
        + engine_version                  = "5.6.10a"
        + id                              = (known after apply)
        + identifier                      = "test-staging-us-east-2-mydata-clone-1"
        + identifier_prefix               = (known after apply)
        + instance_class                  = "db.r5.2xlarge"
        + kms_key_id                      = (known after apply)
        + monitoring_interval             = 0
        + monitoring_role_arn             = (known after apply)
        + performance_insights_enabled    = false
        + performance_insights_kms_key_id = (known after apply)
        + port                            = (known after apply)
        + preferred_backup_window         = (known after apply)
        + preferred_maintenance_window    = (known after apply)
        + promotion_tier                  = 0
        + publicly_accessible             = false
        + storage_encrypted               = (known after apply)
        + tags                            = {
            + "Name"           = "test-staging-us-east-2-mydata-clone"
            + "Namespace"      = "test"
            + "Stage"          = "staging"
            + "environment"    = "staging"
            + "expiration"     = "never"
          }
        + writer                          = (known after apply)
      }

Versions:
terraform_0.12.16
provider "local" (hashicorp/local) 1.4.0
provider "aws" (hashicorp/aws) 2.38.0...
provider "null" (hashicorp/null) 2.1.2...
provider "template" (hashicorp/template) 2.1.2
provider "mysql" (terraform-providers/mysql) 1.9.0
provider "random" (hashicorp/random) 2.2.1

Expected Behavior

A new global cluster should be created, and a new RDS cluster should be created from the snapshot and attached to the global cluster.

Actual Behavior

A global cluster is created and a standalone RDS cluster is created from the snapshot, but the RDS cluster is not attached to the global cluster.

When created without using snapshot_identifier, the global cluster and RDS clusters are created correctly.

The error when trying to re-apply the Terraform is:

Error: Existing RDS Clusters cannot be added to an existing RDS Global Cluster
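As a possible workaround (a minimal, unverified sketch mirroring the console flow described above; the `restored`/`from_restore` names are placeholders, and the `source_db_cluster_identifier` argument on `aws_rds_global_cluster` is only available in newer AWS provider releases than the 2.38.0 reported here), one could restore the regional cluster from the snapshot first, without any global_cluster_identifier, and only then create the global cluster from it:

```hcl
# Sketch of the console-style two-step flow; identifiers are placeholders.

# Step 1: restore the regional cluster from the snapshot, with no
# global_cluster_identifier set on it.
resource "aws_rds_cluster" "restored" {
  cluster_identifier  = "test-staging-us-east-2-mydata-clone"
  engine              = "aurora"
  engine_version      = "5.6.10a"
  snapshot_identifier = var.snapshot_identifier
  storage_encrypted   = true
  skip_final_snapshot = true
}

# Step 2: create the global cluster *from* the restored cluster.
# NOTE: source_db_cluster_identifier does not exist in aws provider 2.38.0;
# this assumes a provider version that supports it.
resource "aws_rds_global_cluster" "from_restore" {
  global_cluster_identifier    = "clone-test-mydata-global"
  source_db_cluster_identifier = aws_rds_cluster.restored.arn
}
```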

@hashibot hashibot bot added the service/rds label Nov 21, 2019
@jamengual jamengual commented Nov 28, 2019

Hi, is there anything I can do to help someone take a look at this?

@jamengual jamengual changed the title from "Inconsitancy when creating Aurora RDS global cluster when snapshot_identifier is defined" to "ERROR when creating an Aurora RDS global cluster and snapshot_identifier is defined" Dec 2, 2019