fix(backup): backups are not deleted when retained nr of backups >= 20 (#566)

Co-authored-by: Awais Malik <awmalik@google.com>
tjespers and g-awmalik committed Feb 8, 2024
1 parent c7ab6ec commit 6c4b0e3
Showing 4 changed files with 12 additions and 3 deletions.
1 change: 1 addition & 0 deletions modules/backup/README.md
@@ -54,6 +54,7 @@ fetch workflows.googleapis.com/Workflow
 | Name | Description | Type | Default | Required |
 |------|-------------|------|---------|:--------:|
 | backup\_retention\_time | The number of days backups should be kept | `number` | `30` | no |
+| backup\_runs\_list\_max\_results | The maximum number of backup runs to list when fetching internal backup runs for the instance. This number must be larger than the number of backups you wish to keep. For example, with a daily backup schedule and a backup\_retention\_time of 30 days, set this to at least 31 so that old backups are deleted. | `number` | `31` | no |
 | backup\_schedule | The cron schedule to execute the internal backup | `string` | `"45 2 * * *"` | no |
 | compress\_export | Whether or not to compress the export when storing in the bucket; Only valid for MySQL and PostgreSQL | `bool` | `true` | no |
 | connector\_params\_timeout | The end-to-end duration the connector call is allowed to run for before throwing a timeout exception. The default value is 1800 and this should be the maximum for connector methods that are not long-running operations. Otherwise, for long-running operations, the maximum timeout for a connector call is 31536000 seconds (one year). | `number` | `1800` | no |
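
For context, a minimal usage sketch of the backup module with these inputs (the module source and the project/instance names are illustrative assumptions, not part of this commit): with the default daily schedule and 30 days of retention, `backup_runs_list_max_results` must be at least 31 so the run that has just aged out of retention still appears in the list and can be deleted.

```hcl
module "sql_backup" {
  # Source path is an assumption for illustration; point it at the backup module you actually use.
  source = "./modules/backup"

  project_id   = "my-project"      # hypothetical project
  sql_instance = "my-sql-instance" # hypothetical Cloud SQL instance

  backup_retention_time        = 30           # keep backups for 30 days
  backup_schedule              = "45 2 * * *" # one internal backup per day
  backup_runs_list_max_results = 31           # > number of retained backups (30 daily runs + the newest)
}
```
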
7 changes: 4 additions & 3 deletions modules/backup/main.tf
@@ -65,9 +65,10 @@ resource "google_workflows_workflow" "sql_backup" {
   project         = var.project_id
   service_account = local.service_account
   source_contents = templatefile("${path.module}/templates/backup.yaml.tftpl", {
-    project             = var.project_id
-    instanceName        = var.sql_instance
-    backupRetentionTime = var.backup_retention_time
+    project                  = var.project_id
+    instanceName             = var.sql_instance
+    backupRetentionTime      = var.backup_retention_time
+    backupRunsListMaxResults = var.backup_runs_list_max_results
   })
 }

1 change: 1 addition & 0 deletions modules/backup/templates/backup.yaml.tftpl
@@ -36,6 +36,7 @@ main:
         args:
           project: ${project}
           instance: ${instanceName}
+          maxResults: ${backupRunsListMaxResults}
         result: backupList
     - delete old backups:
         for:
6 changes: 6 additions & 0 deletions modules/backup/variables.tf
@@ -42,6 +42,12 @@ variable "backup_retention_time" {
   default     = 30
 }
 
+variable "backup_runs_list_max_results" {
+  description = "The maximum number of backup runs to list when fetching internal backup runs for the instance. This number must be larger than the number of backups you wish to keep. For example, with a daily backup schedule and a backup_retention_time of 30 days, set this to at least 31 so that old backups are deleted."
+  type        = number
+  default     = 31
+}
+
 variable "scheduler_timezone" {
   description = "The Timezone in which the Scheduler Jobs are triggered"
   type        = string
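
The safe lower bound for `backup_runs_list_max_results` depends on the schedule frequency, not just the retention period. A hedged sketch of the arithmetic (the local names are illustrative, not part of the module): for an hourly schedule and 7 days of retention, at least 7 * 24 + 1 = 169 runs must be listable before the oldest run can be found and deleted.

```hcl
locals {
  # Illustrative only: derive a safe value from the retention period and schedule frequency.
  retention_days  = 7
  backups_per_day = 24 # e.g. an hourly backup_schedule such as "45 * * * *"

  # One extra so the run that has just aged out of retention still shows up in the list.
  min_backup_runs_list_max_results = local.retention_days * local.backups_per_day + 1
}
```

A value below this bound means older runs never appear in the listed page handed to the delete step, which appears to be the failure mode this commit addresses.
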
