Feature/snapshot reencryption #142

Merged 17 commits into develop on Apr 19, 2023
Conversation

tarunmenon95 (Contributor)

Added feature to re-encrypt RDS instance and RDS Cluster snapshots when shelvery_reencrypt_kms_key_id is supplied.
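
For reference, both settings are supplied as Shelvery environment variables (see "env vars" in the instance example below). A minimal sketch of driving a run from Python; the KMS key ARN and account id are placeholders, and invoking the CLI through subprocess is just one possible way to run it:

import os
import subprocess

# Placeholder values - substitute your own KMS key ARN and destination account id
os.environ['shelvery_reencrypt_kms_key_id'] = \
    'arn:aws:kms:ap-southeast-2:987654321987:key/11111111-2222-3333-4444-555555555555'
os.environ['shelvery_share_aws_account_ids'] = '123456789123'

# Create RDS Cluster backups; Shelvery re-encrypts the snapshot when the key id is set
subprocess.run(['shelvery', 'rds_cluster', 'create_backups'], check=True)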

RDS Cluster Example

  1. We first create an RDS Cluster with shelvery backups enabled.
  2. Then set shelvery_reencrypt_kms_key_id to the ARN of the new KMS key we want to re-encrypt our snapshots with, as well as shelvery_share_aws_account_ids with the account we are sharing with.
  3. Call shelvery rds_cluster create_backups.
  4. Shelvery will now create the initial backup as expected; when it detects that the shelvery_reencrypt_kms_key_id parameter is set, it will then begin creating the new re-encrypted snapshot (a rough sketch of the equivalent AWS calls follows this example).
Re-encrypt KMS Key found, creating new backup with arn:aws:kms:....
Creating new encrypted backup shelvery-test-2023-04-18-0025-daily-re-encrypted

Note - Shelvery updates the tags of the new snapshot to match the new name:

  • shelvery:name : shelvery-test-2023-04-18-0025-daily-re-encrypted
  5. Shelvery will wait until the new snapshot is created, then delete the old snapshot.
New encrypted backup shelvery-test-2023-04-18-0025-daily-re-encrypted created
Cleaning up un-encrypted backup: shelvery-test-2023-04-18-0025-daily
Shared backup shelvery-test-2023-04-18-0025-daily-re-encrypted (ap-southeast-2) with 123456789123
Wrote meta for backup shelvery-test-2023-04-18-0025-daily-re-encrypted of type rds_cluster to s3://shelvery.data.987654321987/backups/shared/12345678123/rds_cluster/shelvery-test-2023-04-18-0025-daily-re-encrypted.yaml
  6. Lastly, we call shelvery rds_cluster pull_shared_backups from our destination account and observe that Shelvery creates the local manual snapshot in the destination account from the new re-encrypted snapshot.

Screenshot (2023-04-18): the re-encrypted snapshot pulled into the destination account.
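
Conceptually, the re-encryption step boils down to copying the cluster snapshot with the new KMS key, waiting for the copy to become available, and deleting the original. A rough boto3 sketch of the equivalent calls - not Shelvery's actual implementation, with illustrative snapshot names and a placeholder key ARN:

import boto3

rds = boto3.client('rds', region_name='ap-southeast-2')

source_id = 'shelvery-test-2023-04-18-0025-daily'
target_id = source_id + '-re-encrypted'
kms_key_arn = 'arn:aws:kms:ap-southeast-2:987654321987:key/11111111-2222-3333-4444-555555555555'  # placeholder

# Copy the existing snapshot, encrypting the copy with the new KMS key
rds.copy_db_cluster_snapshot(
    SourceDBClusterSnapshotIdentifier=source_id,
    TargetDBClusterSnapshotIdentifier=target_id,
    KmsKeyId=kms_key_arn,
    CopyTags=True,
)

# Wait until the re-encrypted copy is available, then clean up the un-encrypted original
rds.get_waiter('db_cluster_snapshot_available').wait(DBClusterSnapshotIdentifier=target_id)
rds.delete_db_cluster_snapshot(DBClusterSnapshotIdentifier=source_id)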

RDS Instance Example

  1. The process is the same for RDS instances, as we set the same env vars.
  2. We call shelvery rds create_backups.
  3. Observe that the re-encrypted snapshot is created as expected (an instance-level sketch follows this example):
New encrypted backup shelvery-test-instance-2023-04-18-0045-daily-re-encrypted created
Updating tags for new snapshot - shelvery-test-instance-2023-04-18-0045-daily-re-encrypted
Cleaning up un-encrypted backup: shelvery-test-instance-2023-04-18-0045-daily
Shared backup shelvery-test-instance-2023-04-18-0045-daily-re-encrypted (ap-southeast-2) with 123456789123
Wrote meta for backup shelvery-test-instance-2023-04-18-0045-daily-re-encrypted of type rds to s3://shelvery.data.987654321987-ap-southeast-2.base2tools/backups/shared/123456789123/rds/shelvery-test-instance-2023-04-18-0045-daily-re-encrypted.yaml
  4. We call shelvery rds pull_shared_backups and observe that the manual snapshot is created in the destination account.
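
The instance flow is the same idea with the instance-level snapshot APIs, plus the tag update shown in the "Updating tags for new snapshot" log line. Again a hedged sketch rather than Shelvery's actual code, with illustrative identifiers and a placeholder key ARN:

import boto3

rds = boto3.client('rds', region_name='ap-southeast-2')

source_id = 'shelvery-test-instance-2023-04-18-0045-daily'
target_id = source_id + '-re-encrypted'
kms_key_arn = 'arn:aws:kms:ap-southeast-2:987654321987:key/11111111-2222-3333-4444-555555555555'  # placeholder

# Copy the instance snapshot, encrypting the copy with the new KMS key
response = rds.copy_db_snapshot(
    SourceDBSnapshotIdentifier=source_id,
    TargetDBSnapshotIdentifier=target_id,
    KmsKeyId=kms_key_arn,
)

# Update shelvery:name on the copy so it matches the new snapshot name
rds.add_tags_to_resource(
    ResourceName=response['DBSnapshot']['DBSnapshotArn'],
    Tags=[{'Key': 'shelvery:name', 'Value': target_id}],
)

# Wait for the copy to become available, then remove the un-encrypted original
rds.get_waiter('db_snapshot_available').wait(DBSnapshotIdentifier=target_id)
rds.delete_db_snapshot(DBSnapshotIdentifier=source_id)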

Other resources

The code for re-encrypting RDS instance and cluster snapshots should not interfere with other resource types, even when shelvery_reencrypt_kms_key_id is set, as the only change to engine.py is to return the new backup id of the re-encrypted snapshot, which only occurs for RDS resources:

try:
    new_backup_id = self.share_backup_with_account(backup_region, backup_id, destination_account_id)
    # assign new backup id if a new snapshot is created (e.g. a re-encrypted rds snapshot)
    backup_id = new_backup_id if new_backup_id else backup_id
    self.logger.info(f"Shared backup {backup_id} ({backup_region}) with {destination_account_id}")

@Guslington merged commit fe218f3 into develop on Apr 19, 2023
0 of 2 checks passed