AWS Aurora RDS wants to re-create after every apply when supplied with a major engine version #4777

Open
ghost opened this issue Jun 7, 2018 · 9 comments
Labels
bug — Addresses a defect in current functionality.
service/rds — Issues and PRs that pertain to the rds service.

Comments


ghost commented Jun 7, 2018

This issue was originally opened by @AwkwardBen as hashicorp/terraform#18200. It was migrated here as a result of the provider split. The original body of the issue is below.


Essentially, when I use engine_version = "9.6", the resource gets created OK, but after every subsequent terraform apply it gets destroyed and re-created. I have the following module:

module "aurora-postgres-test-rds" {
  source = "git::https://github.com/UKHomeOffice/acp-tf-rds?ref=v0.0.8"

  name                        = "testaurora"
  database_name               = "testaurora"
  number_of_aurora_instances  = "1"
  allocated_storage           = "10"
  backup_retention_period     = "7"
  backup_window               = "22:00-23:59"
  cidr_blocks                 = ["${values(var.compute_cidrs)}"]
  vpc_id                      = "${var.vpc_id}"
  subnet_ids                  = ["${data.aws_subnet_ids.private.ids}"]
  database_password           = "password123"
  database_port               = "5432"
  database_user               = "root"
  db_parameter_family         = "aurora-postgresql9.6"
  db_cluster_parameter_family = "aurora-postgresql9.6"
  dns_zone                    = "${var.dns_zone}"
  engine_type                 = "aurora-postgresql"
  engine_version              = "9.6"
  environment                 = "${var.environment}"
  instance_class              = "db.r4.16xlarge"
  storage_encrypted           = "true"
}

Which breaks down to the following resources inside the acp-tf-rds module:

resource "aws_rds_cluster" "aurora_cluster" {
  # aurora = MySQL 5.6-compatible, aurora-mysql = MySQL 5.7-compatible
  count = "${var.engine_type == "aurora" || var.engine_type == "aurora-mysql" || var.engine_type == "aurora-postgresql" ? 1 : 0}"

  backup_retention_period         = "${var.backup_retention_period}"
  cluster_identifier              = "${var.name}"
  database_name                   = "${var.name}"
  db_cluster_parameter_group_name = "${aws_rds_cluster_parameter_group.db.id}"
  db_subnet_group_name            = "${aws_db_subnet_group.db.name}"
  engine                          = "${var.engine_type}"
  engine_version                  = "${var.engine_version}"
  master_password                 = "${var.database_password}"
  master_username                 = "${var.database_user}"
  port                            = "${var.database_port}"
  preferred_backup_window         = "${var.backup_window}"
  skip_final_snapshot             = "${var.skip_final_snapshot}"
  storage_encrypted               = "${var.storage_encrypted}"
  vpc_security_group_ids          = ["${aws_security_group.db.id}"]
}

# Aurora cluster instance
resource "aws_rds_cluster_instance" "aurora_cluster_instance" {
  count = "${var.engine_type == "aurora" || var.engine_type == "aurora-mysql" || var.engine_type == "aurora-postgresql" ? var.number_of_aurora_instances : 0}"

  auto_minor_version_upgrade = "${var.auto_minor_version_upgrade}"
  cluster_identifier         = "${aws_rds_cluster.aurora_cluster.id}"
  db_subnet_group_name       = "${aws_db_subnet_group.db.name}"
  db_parameter_group_name    = "${aws_db_parameter_group.db.id}"
  engine                     = "${var.engine_type}"
  engine_version             = "${var.engine_version}"
  identifier                 = "${var.name}${var.number_of_aurora_instances != 1 ? "-${count.index}" : "" }"
  instance_class             = "${var.instance_class}"
  publicly_accessible        = false
  tags                       = "${merge(var.tags, map("Name", format("%s-%s", var.environment, var.name)), map("Env", var.environment))}"
}

If I instead use engine_version = "9.6.6", this issue doesn't occur; however, with automatic minor version upgrades enabled in AWS, I would prefer to use 9.6.
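
One interim workaround, sketched here rather than taken from the original report (the values below are placeholders), is to pin the full version once and have Terraform ignore the drift that automatic minor version upgrades create:

resource "aws_rds_cluster" "aurora_cluster" {
  cluster_identifier  = "testaurora"
  engine              = "aurora-postgresql"
  engine_version      = "9.6.6"   # pin the full MAJOR.MINOR.PATCH at create time
  master_username     = "root"
  master_password     = "${var.database_password}"
  skip_final_snapshot = true

  # With auto minor version upgrades enabled, AWS may later report e.g.
  # "9.6.8"; ignoring the attribute keeps subsequent plans clean
  # (Terraform 0.11 string syntax, matching the module above).
  lifecycle {
    ignore_changes = ["engine_version"]
  }
}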

Terraform Version

$ terraform -v
Terraform v0.11.5

Expected Behavior

The expectation is for the resource not to be destroyed and re-created after each terraform apply, especially when the changes have nothing to do with the module.

Actual Behavior

Running another terraform apply will re-create the Aurora AWS resources.

Additional Context

Terraform runs within the container quay.io/ukhomeofficedigital/terraform-toolset:v0.2.1 and is deployed as part of a drone.io pipeline.

@arwilczek90
Contributor

This also seems to happen (at least with Aurora PostgreSQL) when you change the minor version from lower to higher instead of pushing an upgrade.

@bflad
Contributor

bflad commented Oct 5, 2018

Support for in-place updates of the engine_version argument (with an outage, according to the RDS API, presumably while it reboots instances) has been merged and will be released with version 1.40.0 of the AWS provider, likely in the middle of next week.

The original issue, suppressing the difference when engine_version is specified as MAJOR.MINOR in the Terraform configuration, is I believe still outstanding, so I'm keeping this issue open.

@sstarcher

Just ran into this issue today. We were used to AWS RDS supporting this, and a developer accidentally blew away the Aurora database because of this bug.

@dnorth98

I've also just run into this using provider version 2.48.0 while updating from PostgreSQL 10.7 to 10.11. Terraform wanted to destroy the nodes and create new ones, so I did an in-place upgrade via the console and then updated my tfvars file with the new version.

It is true that a MAJOR upgrade is a destroy, but a minor upgrade can be done in place...
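
A sketch of that manual workflow, with hypothetical identifiers not taken from this thread: perform the minor upgrade outside Terraform, then align the configuration so the next plan converges with no diff:

resource "aws_rds_cluster" "example" {
  cluster_identifier  = "example-cluster"
  engine              = "aurora-postgresql"

  # Was "10.7"; updated to match the in-place upgrade already performed
  # through the RDS console, so Terraform no longer plans a replacement.
  engine_version      = "10.11"

  master_username     = "root"
  master_password     = "${var.database_password}"
  skip_final_snapshot = true
}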

@tschmitt

And confirming this is still a thing: upgrading from 9.6.9 ==> 9.6.18 using provider 2.70.0. I reverted to using the AWS API for the upgrade, with a 10-second outage.

@justinretzolk
Member

Hi all 👋 Thank you for taking the time to file this and for the continued discussion. As of v3.36.0 of the provider, updates to the engine_version parameter of aws_rds_cluster_instance resources no longer forces replacement of the resource. With that in mind, can you verify whether or not you're still experiencing this behavior?

@Benvorth

I still have this issue with provider hashicorp/aws version 4.27.0

@Jai-cloud

I still have this issue with provider hashicorp/aws version 4.27.0

Hello my friend, I am facing the same issue. Have you found any solution?
