
[WIP] Adding support for cluster engine upgrade #5010

Merged 1 commit into hashicorp:master on Oct 5, 2018

Conversation


@taganaka taganaka commented Jun 27, 2018

Reference #4777

While this first batch of changes is enough for my specific use case, I believe aws_rds_cluster_instance must be refactored as well.

Changes proposed in this pull request:

  • Set ForceNew to false on engine_version for aws_rds_cluster
  • Proposed: Deprecate the engine_version and engine arguments for aws_rds_cluster_instance

I'd like your opinion on the argument deprecations.
When an RDS instance is part of a cluster, the engine and engine version are set at the cluster level.
The AWS API refuses to change the engine version of an instance belonging to a cluster; each change must be made at the cluster level. Instead, engine and engine_version should be fetched from an already existing aws_rds_cluster resource.
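To illustrate the pattern I have in mind (a sketch only; resource names and the instance class are placeholders, not part of this PR):

```hcl
# Sketch: the instance inherits engine settings from the cluster rather than
# declaring its own. Resource names and instance_class are placeholders.
resource "aws_rds_cluster_instance" "example" {
  cluster_identifier = "${aws_rds_cluster.example.id}"
  engine             = "${aws_rds_cluster.example.engine}"
  engine_version     = "${aws_rds_cluster.example.engine_version}"
  instance_class     = "db.r4.large"
}
```

This way, changing engine_version on the aws_rds_cluster flows through to the instances without Terraform attempting an instance-level modification.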

Output from acceptance testing:

$ make testacc TESTARGS='-run=TestAccAWSRDSCluster_EngineVersion'
==> Checking that code complies with gofmt requirements...
TF_ACC=1 go test ./... -v -run=TestAccAWSRDSCluster_EngineVersion -timeout 120m
?   	github.com/terraform-providers/terraform-provider-aws	[no test files]
=== RUN   TestAccAWSRDSCluster_EngineVersion
--- PASS: TestAccAWSRDSCluster_EngineVersion (138.35s)
=== RUN   TestAccAWSRDSCluster_EngineVersionWithPrimaryInstance
--- PASS: TestAccAWSRDSCluster_EngineVersionWithPrimaryInstance (1094.50s)
PASS
ok  	github.com/terraform-providers/terraform-provider-aws/aws	1232.885s
$

@ghost ghost added the size/M Managed by automation to categorize the size of a PR. label Jun 27, 2018
@bflad bflad added enhancement Requests to existing resources that expand the functionality or scope. service/rds Issues and PRs that pertain to the rds service. labels Jun 27, 2018

mdlavin commented Sep 13, 2018

If anybody is interested in testing out this feature in a patched v1.36.0 version, I've made some Alpine Linux x64 binaries available here: https://github.com/lifeomic/terraform-provider-aws/releases/tag/v1.36.0_patched_f2d0f833c


tfitch commented Oct 2, 2018

My team is looking for this functionality too, so thank you @taganaka for writing it up.

If you think it's ready to go, I humbly suggest you remove the [WIP] from the PR title; I've heard the team at HashiCorp skips over most PRs like that because they're self-described as incomplete.

bflad added a commit that referenced this pull request Oct 5, 2018
* Note aws_rds_cluster engine_version updates will cause an outage
* Revert aws_rds_cluster source_engine_version ForceNew
* Revert aws_rds_cluster_instance ForceNew
* Remove engine and engine_version from testAccAWSClusterConfig_EngineVersionWithPrimaryInstance

@bflad bflad left a comment


Hi @taganaka 👋 Thanks so much for submitting this, and sorry it took a while in the review queue. This was most of the way there, but I have noted a few changes that will occur post-merge (10024db), along with a documentation update noting that updating the engine_version will cause an outage (as noted in the API documentation). I lean towards deprecating at least the engine_version argument in the aws_rds_cluster_instance resource in the future.

Testing including this pull request and 10024db:

 Tests passed: 41, ignored: 1
--- PASS: TestAccAWSRDSCluster_missingUserNameCausesError (3.28s)
--- PASS: TestAccAWSRDSCluster_namePrefix (91.37s)
--- PASS: TestAccAWSRDSCluster_takeFinalSnapshot (97.40s)
--- PASS: TestAccAWSRDSCluster_updateCloudwatchLogsExports (110.15s)
--- PASS: TestAccAWSRDSCluster_updateIamRoles (107.41s)
--- PASS: TestAccAWSRDSCluster_BacktrackWindow (110.68s)
--- PASS: TestAccAWSRDSCluster_importBasic (117.68s)
--- PASS: TestAccAWSRDSCluster_basic (117.85s)
--- PASS: TestAccAWSRDSCluster_updateTags (130.30s)
--- PASS: TestAccAWSRDSCluster_generatedName (131.81s)
--- PASS: TestAccAWSRDSCluster_encrypted (95.27s)
--- PASS: TestAccAWSRDSCluster_kmsKey (114.89s)
--- PASS: TestAccAWSRDSCluster_iamAuth (105.75s)
--- PASS: TestAccAWSRDSCluster_DeletionProtection (108.86s)
--- PASS: TestAccAWSRDSCluster_EngineMode_ParallelQuery (105.85s)
--- PASS: TestAccAWSRDSCluster_EngineVersion (107.40s)
--- PASS: TestAccAWSRDSCluster_backupsUpdate (147.94s)
--- PASS: TestAccAWSRDSCluster_EngineMode (293.12s)
--- PASS: TestAccAWSRDSCluster_ScalingConfiguration (202.07s)
--- PASS: TestAccAWSRDSCluster_Port (220.92s)
--- PASS: TestAccAWSRDSCluster_SnapshotIdentifier_EngineMode_ParallelQuery (303.68s)
--- PASS: TestAccAWSRDSCluster_SnapshotIdentifier_DeletionProtection (366.72s)
--- PASS: TestAccAWSRDSCluster_SnapshotIdentifier (393.09s)
--- PASS: TestAccAWSRDSClusterInstance_generatedName (628.99s)
--- PASS: TestAccAWSRDSCluster_SnapshotIdentifier_EngineMode_Provisioned (394.00s)
--- PASS: TestAccAWSRDSClusterInstance_withInstanceEnhancedMonitor (657.74s)
--- PASS: TestAccAWSRDSClusterInstance_namePrefix (690.94s)
--- PASS: TestAccAWSRDSClusterInstance_az (696.89s)
--- PASS: TestAccAWSRDSClusterInstance_PubliclyAccessible (709.47s)
--- PASS: TestAccAWSRDSClusterInstance_withInstancePerformanceInsights (726.79s)
--- PASS: TestAccAWSRDSCluster_SnapshotIdentifier_VpcSecurityGroupIds_Tags (329.00s)
--- PASS: TestAccAWSRDSCluster_SnapshotIdentifier_Tags (373.88s)
--- PASS: TestAccAWSRDSCluster_SnapshotIdentifier_VpcSecurityGroupIds (385.61s)
--- PASS: TestAccAWSRDSClusterInstance_importBasic (847.99s)
--- PASS: TestAccAWSRDSClusterInstance_disappears (853.20s)
--- PASS: TestAccAWSRDSClusterInstance_kmsKey (856.90s)
--- PASS: TestAccAWSRDSCluster_SnapshotIdentifier_EncryptedRestore (363.75s)
--- PASS: TestAccAWSRDSCluster_EngineVersionWithPrimaryInstance (1063.37s)
--- PASS: TestAccAWSRDSCluster_s3Restore (1400.44s)
--- PASS: TestAccAWSRDSClusterInstance_basic (1409.23s)
--- PASS: TestAccAWSRDSCluster_EncryptedCrossRegionReplication (1533.64s)

@@ -119,7 +119,7 @@ func resourceAwsRDSCluster() *schema.Resource {
 "engine_version": {
 	Type: schema.TypeString,
 	Optional: true,
-	ForceNew: true,
+	ForceNew: false,

Nitpick: ForceNew: false is the default and can be omitted from the schema.

@@ -163,7 +163,7 @@ func resourceAwsRDSCluster() *schema.Resource {
 "source_engine_version": {
 	Type: schema.TypeString,
 	Required: true,
-	ForceNew: true,
+	ForceNew: false,

This attribute does not relate to the changes in this pull request -- will revert on merge.

@@ -97,7 +97,7 @@ func resourceAwsRDSClusterInstance() *schema.Resource {
 "engine_version": {
 	Type: schema.TypeString,
 	Optional: true,
-	ForceNew: true,
+	ForceNew: false,

While this change was necessary to get the test configuration (testAccAWSClusterConfig_EngineVersionWithPrimaryInstance) working where engine_version is actually set, the correct behavior is to leave it as ForceNew: true, since it's not possible to update the engine_version on its own; the correct configuration is to omit engine_version.

The challenge here is that the resource itself cannot be configured to support updates of engine_version on its own. This can be seen by creating an acceptance test and configuration like the one below:

func TestAccAWSRDSClusterInstance_EngineVersion(t *testing.T) {
	var dbInstance rds.DBInstance
	rName := acctest.RandomWithPrefix("tf-acc-test")
	resourceName := "aws_rds_cluster_instance.test"

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckAWSClusterDestroy,
		Steps: []resource.TestStep{
			{
				Config: testAccAWSRDSClusterInstanceConfig_EngineVersion(rName, "9.6.3"),
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAWSClusterInstanceExists(resourceName, &dbInstance),
					resource.TestCheckResourceAttr(resourceName, "engine_version", "9.6.3"),
				),
			},
			{
				ResourceName:            resourceName,
				ImportState:             true,
				ImportStateVerify:       true,
				ImportStateVerifyIgnore: []string{"apply_immediately"},
			},
			{
				Config: testAccAWSRDSClusterInstanceConfig_EngineVersion(rName, "9.6.6"),
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAWSClusterInstanceExists(resourceName, &dbInstance),
					resource.TestCheckResourceAttr(resourceName, "engine_version", "9.6.6"),
				),
			},
		},
	})
}

func testAccAWSRDSClusterInstanceConfig_EngineVersion(rName, engineVersion string) string {
	return fmt.Sprintf(`
resource "aws_rds_cluster" "test" {
  cluster_identifier  = %q
  engine              = "aurora-postgresql"
  engine_version      = "9.6.3"
  master_username     = "foo"
  master_password     = "mustbeeightcharaters"
  skip_final_snapshot = true

  lifecycle {
    ignore_changes = ["engine_version"]
  }
}

resource "aws_rds_cluster_instance" "test" {
  apply_immediately       = true
  cluster_identifier      = "${aws_rds_cluster.test.id}"
  engine                  = "${aws_rds_cluster.test.engine}"
  engine_version          = %q
  identifier              = %q
  instance_class          = "db.r4.large"
}
`, rName, engineVersion, rName)
}

In this setup, without the code changes from this pull request, Terraform will report a perpetual difference, trying to perform engine_version updates, since nothing actually triggers the engine version update in the Update function:

--- FAIL: TestAccAWSRDSClusterInstance_EngineVersion (695.08s)
    testing.go:527: Step 2 error: Check failed: Check 2/2 error: aws_rds_cluster_instance.test: Attribute 'engine_version' expected "9.6.6", got "9.6.3"

If we add the following code within the Update function of the resource:

func resourceAwsRDSClusterInstanceUpdate(d *schema.ResourceData, meta interface{}) error {
// ...
	if d.HasChange("engine_version") {
		req.EngineVersion = aws.String(d.Get("engine_version").(string))
		requestUpdate = true
	}

We then receive an error when attempting the update:

--- FAIL: TestAccAWSRDSClusterInstance_EngineVersion (684.60s)
    testing.go:527: Step 2 error: Error applying: 1 error occurred:
        	* aws_rds_cluster_instance.test: 1 error occurred:
        	* aws_rds_cluster_instance.test: Error modifying DB Instance tf-acc-test-4654433485176023154: InvalidParameterCombination: The specified DB Instance is a member of a cluster. Modify the DB engine version for the DB Cluster using the ModifyDbCluster API

To verify that the engine_version parameter should not be configurable at all, we can remove the first two TestSteps and attempt to create a 9.6.6 instance on a 9.6.3 cluster:

func TestAccAWSRDSClusterInstance_EngineVersion(t *testing.T) {
	var dbInstance rds.DBInstance
	rName := acctest.RandomWithPrefix("tf-acc-test")
	resourceName := "aws_rds_cluster_instance.test"

	resource.Test(t, resource.TestCase{
		PreCheck:     func() { testAccPreCheck(t) },
		Providers:    testAccProviders,
		CheckDestroy: testAccCheckAWSClusterDestroy,
		Steps: []resource.TestStep{
			{
				Config: testAccAWSRDSClusterInstanceConfig_EngineVersion(rName, "9.6.6"),
				Check: resource.ComposeTestCheckFunc(
					testAccCheckAWSClusterInstanceExists(resourceName, &dbInstance),
					resource.TestCheckResourceAttr(resourceName, "engine_version", "9.6.6"),
				),
			},
		},
	})
}

This also returns an error:

--- FAIL: TestAccAWSRDSClusterInstance_EngineVersion (114.07s)
    testing.go:527: Step 0 error: Error applying: 1 error occurred:
        	* aws_rds_cluster_instance.test: 1 error occurred:
        	* aws_rds_cluster_instance.test: error creating RDS DB Instance: InvalidParameterCombination: The engine version that you requested for your DB instance (9.6.6) does not match the engine version of your DB cluster (9.6.3).

I agree that we should deprecate configuring both the engine and engine_version attributes within the aws_rds_cluster_instance resource.

identifier = "${var.cluster_identifier}"
cluster_identifier = "${aws_rds_cluster.test.cluster_identifier}"
engine = "${var.engine}"
engine_version = "${var.engine_version}"

As noted above, configuring engine_version in the aws_rds_cluster_instance resource is extraneous and will cause problems when performing updates. Instead, it is better to omit this Optional argument; it may soon be deprecated. I will update this on merge. 👍
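For reference, a minimal sketch of the configuration shape after this cleanup (identifiers are placeholders; this is not the exact merged test configuration):

```hcl
# Sketch: engine_version is omitted on the instance entirely; the cluster
# owns the engine version. Identifiers are placeholders.
resource "aws_rds_cluster_instance" "test" {
  apply_immediately  = true
  cluster_identifier = "${aws_rds_cluster.test.id}"
  engine             = "${aws_rds_cluster.test.engine}"
  identifier         = "example-instance"
  instance_class     = "db.r4.large"
}
```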

@bflad bflad added this to the v1.40.0 milestone Oct 5, 2018
@bflad bflad merged commit 3a14da7 into hashicorp:master Oct 5, 2018
bflad added a commit that referenced this pull request Oct 5, 2018

bflad commented Oct 10, 2018

This has been released in version 1.40.0 of the AWS provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.

@phani308

Does the Terraform AWS provider 1.40 support a major version upgrade (upgrading the engine_version from 9.6.8 to 10.4) without destroying the instance?


tfitch commented Oct 23, 2018

No, but to be fair, in my experience you can't upgrade from 9.6.x to 10.4 from the AWS Console either.

@phani308

Below is the documentation from AWS on the recently released support for upgrading from 9.6.* to 10.4:

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html

@phani308

Just to make things clear: are you saying a major version upgrade is not possible for RDS PostgreSQL or Aurora PostgreSQL? @tfitch


ghost commented Apr 2, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Apr 2, 2020