
RDS encrypted Snapshot restore uses snapshot's kms key #6063

Closed
ghost opened this issue Oct 3, 2018 · 6 comments
Labels
service/rds Issues and PRs that pertain to the rds service.

Comments

@ghost

ghost commented Oct 3, 2018

This issue was originally opened by @nomeelnoj as hashicorp/terraform#18984. It was migrated here as a result of the provider split. The original body of the issue is below.


Terraform Version

Terraform v0.11.8

Terraform Configuration Files

resource "aws_kms_key" "kms" {
  description  = "rds-tf-encrypt/${var.env_prefix}"
}

resource "random_id" "db_snapshot_suffix" {
  keepers = {
    rds_snapshot = "${var.snapshot_identifier}"
  }

  byte_length = 8
}

# Create RDS instance
resource "aws_db_instance" "rds" {
  identifier                = "${var.name_prefix}-${var.env_prefix}${var.version_prefix}"
  allocated_storage         = "${var.storage}"
  engine                    = "${var.engine}"
  engine_version            = "${lookup(var.engine_version, var.engine)}"
  instance_class            = "${var.instance_class}"
  publicly_accessible       = "${var.publicly_accessible}"
  backup_retention_period   = "${var.backup_retention_period}"
  apply_immediately         = "${var.apply_immediately}"
  multi_az                  = "${var.multi_az}"
  storage_type              = "${var.storage_type}"
  storage_encrypted         = true
  kms_key_id                = "${aws_kms_key.kms.arn}"
  final_snapshot_identifier = "${var.final_snapshot_identifier}-${var.env_prefix}-${random_id.db_snapshot_suffix.hex}"
  skip_final_snapshot       = "${var.env_prefix == "prd" ? false : true}"
  name                      = "${var.db_name}"
  username                  = "${var.username}"
  password                  = "${var.password}"
  vpc_security_group_ids    = ["${var.vpc_security_group_ids}"]
  db_subnet_group_name      = "${aws_db_subnet_group.default.name}"
  parameter_group_name      = "${var.parameter_group_name}"
  monitoring_interval       = "${var.monitoring_interval}"
  monitoring_role_arn       = "${var.monitoring_role_arn}"
  snapshot_identifier       = "${var.snapshot_identifier}"
}

Expected Behavior

When running the TF above, the new database should be encrypted with the KMS key provided, not the key from the snapshot.

Actual Behavior

The new RDS instance is created using the snapshot's KMS key for encryption, not the key specified in kms_key_id.

Steps to Reproduce

Additional Context

To update an RDS encryption key, you can create a copy of the snapshot and change the key for the copy. Couldn't TF be updated to first copy the snapshot, apply the new key to the copy, and then restore from that copy whenever both kms_key_id and snapshot_identifier are provided?
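The manual workaround described above can be sketched with the AWS CLI. This is illustrative only: the snapshot identifiers, instance name, and KMS key ARN are placeholders, not values from this issue.

```shell
# Copy the encrypted snapshot, re-encrypting the copy with the desired CMK.
# All identifiers and the key ARN below are illustrative placeholders.
aws rds copy-db-snapshot \
  --source-db-snapshot-identifier original-snapshot \
  --target-db-snapshot-identifier original-snapshot-rekeyed \
  --kms-key-id arn:aws:kms:us-east-1:111122223333:key/example

# Restore the instance from the re-encrypted copy; the restored
# instance inherits the copy's KMS key.
aws rds restore-db-instance-from-db-snapshot \
  --db-instance-identifier mydb \
  --db-snapshot-identifier original-snapshot-rekeyed
```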

@bflad bflad added the service/rds Issues and PRs that pertain to the rds service. label Oct 3, 2018
@bflad bflad added this to the v1.39.0 milestone Oct 3, 2018
@bflad
Contributor

bflad commented Oct 3, 2018

Hi @nomeelnoj 👋 Sorry you ran into trouble here. Your timing is actually pretty good though as #6012 was recently merged to fix this exact issue and it was just released in version 1.39.0 of the AWS provider this afternoon. 👍

@bflad bflad closed this as completed Oct 3, 2018
@nomeelnoj

Looks like this was integrated for aws_db_cluster, but not aws_db_instance. Is the functionality for db_instance expected soon as well?

@bflad bflad removed this from the v1.39.0 milestone Oct 17, 2018
@bflad
Contributor

bflad commented Oct 17, 2018

Ah, good catch, @nomeelnoj 😅 you are correct. Sorry about that - reopening. The API does not support KmsKeyId directly with RestoreDBInstanceFromDBSnapshot so this is indeed more complicated than working with Aurora clusters. Regardless of what we do here, I would suggest opening a support case with AWS to see if they might implement that into the API directly.

As for the current situation, I'm not sure we would accept having the aws_db_instance resource create an "unmanaged" or temporary snapshot with the specific configuration. I think we would prefer a new aws_db_snapshot_copy or similar resource that manages the lifecycle of a snapshot copy separately. That new resource would also allow copying between regions, which is nice.

@bflad bflad reopened this Oct 17, 2018
@nomeelnoj

Could it be configured to automatically create a copy of the snapshot, assign the new KMS key, then restore from that copy, then delete that copy? Or is that too much going on behind the scenes?

@aeschright aeschright added the needs-triage Waiting for first response or review from a maintainer. label Jun 24, 2019
@bflad
Contributor

bflad commented Nov 5, 2019

Hi again 👋 Sorry for the delayed response.

Could it be configured to automatically create a copy of the snapshot, assign the new KMS key, then restore from that copy, then delete that copy? Or is that too much going on behind the scenes?

In general, yes, we strongly prefer that Terraform resources only manage one piece of infrastructure (even temporarily). Operators have expectations that no other infrastructure is being created outside of what is declared in their Terraform configuration. If any part of the temporary processing failed, Terraform would have no method for tracking or handling the temporary infrastructure left behind.

The good news here is that a community member has contributed what we would expect the solution to look like: a new aws_db_snapshot_copy resource (#9885) with an initial pull request (#9886).

Since that issue and pull request follow our recommended path forward here, I'm going to opt to close this issue so we can consolidate discussions and efforts in those. Thanks again for submitting this.
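For reference, the approach described here would look roughly like the following HCL, assuming the aws_db_snapshot_copy resource merges with the arguments proposed in #9886; resource names and the target identifier are illustrative:

```hcl
# Sketch only: copy the source snapshot under the new KMS key, then
# restore the instance from the re-encrypted copy.
resource "aws_db_snapshot_copy" "rekeyed" {
  source_db_snapshot_identifier = "${var.snapshot_identifier}"
  target_db_snapshot_identifier = "rekeyed-${var.snapshot_identifier}"
  kms_key_id                    = "${aws_kms_key.kms.arn}"
}

resource "aws_db_instance" "rds" {
  # ... other arguments as in the original configuration ...
  storage_encrypted   = true
  snapshot_identifier = "${aws_db_snapshot_copy.rekeyed.id}"
}
```

Because the copy carries the new key, the restored instance picks it up without aws_db_instance ever managing a temporary snapshot itself.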

@bflad bflad closed this as completed Nov 5, 2019
@ghost
Author

ghost commented Mar 29, 2020

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!

@ghost ghost locked and limited conversation to collaborators Mar 29, 2020
@breathingdust breathingdust removed the needs-triage Waiting for first response or review from a maintainer. label Sep 17, 2021