Support for Multiple Backups in a Namespace #316
Comments
projectsyn/component-cluster-backup#1 asks for this feature.
We'd also need this feature, as we have different schedules and also different backends(!) for various components in the same namespace.
I recently faced a similar problem (#648). As a solution, it was proposed to run all backups as the root user. Within the same namespace, different workloads can write under different UIDs, so we also need the ability to run backup processes with different UIDs.
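To illustrate the per-UID case: one Backup per UID could be sketched roughly like this. This is a hypothetical sketch, not a confirmed solution to this issue; `runAsUser` is the standard Kubernetes security-context field, but whether your k8up version honors a `podSecurityContext` on the Backup spec (and whether two such Backups can coexist in one namespace) is exactly what this issue is about, so check the docs for your version.

```yaml
# Hypothetical sketch: run the backup pod as the UID that owns the data.
apiVersion: k8up.io/v1
kind: Backup
metadata:
  name: backup-as-uid-1000
  namespace: my-namespace
spec:
  # Standard Kubernetes pod security context; assumed to be
  # passed through to the backup pod by the operator.
  podSecurityContext:
    runAsUser: 1000
  backend:
    repoPasswordSecretRef:
      key: resticRepoPassword
      name: s3-credentials
```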
Is this planned at some point?
This is the only thing preventing us from migrating from Velero. Is there an update on this?
It would be really helpful to be able to annotate PVCs somehow with some kind of …
It would probably be best to have a label selector field to …

It looks like this issue was added to the k8up v3 milestone and to the planned section of the roadmap. However, I did some reading in the operator config docs, and it does show a backup annotation to check in the help (and in the code):

```
--annotation value  the annotation to be used for filtering (default: "k8up.io/backup") [$BACKUP_ANNOTATION]
```

If I'm understanding correctly, it should work if you use the tested ConfigMap with the env:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: remote-backup-env
data:
  BACKUP_ANNOTATION: "k8up.io/remote-backup"
```

Tested backup example:

```yaml
apiVersion: k8up.io/v1
kind: Backup
metadata:
  name: remote-only-backup
  namespace: my-namespace
spec:
  failedJobsHistoryLimit: 2
  promURL: push-gateway.prometheus.svc:9091
  successfulJobsHistoryLimit: 2
  backend:
    envFrom:
      - configMapRef:
          name: remote-backup-env
    repoPasswordSecretRef:
      key: resticRepoPassword
      name: s3-credentials
    s3:
      accessKeyIDSecretRef:
        key: accessKeyID
        name: s3-credentials
        optional: false
      bucket: my-bucket
      endpoint: mys3.endpoint.anonymized
      secretAccessKeySecretRef:
        key: secretAccessKey
        name: s3-credentials
        optional: false
```

When I exec into the backup pod and check the environment, I see `BACKUP_ANNOTATION=k8up.io/remote-backup`. The way I annotated my PVCs before applying the backup is like so:

```shell
kubectl annotate pvc my-not-ignored-pvc k8up.io/remote-backup='false'
kubectl annotate pvc my-pvc-i-want-to-backup-remotely k8up.io/remote-backup='true'
# to be sure, I also annotated the associated pods
kubectl annotate pod my-not-ignored-pod-8dsu1 k8up.io/remote-backup='false'
kubectl annotate pod my-pod-i-want-to-backup-remotely-fc6ve k8up.io/remote-backup='true'
```

So based on all of that, I feel like this is partially implemented, if we were OK with just using annotations. However, I'm unsure why I can't use a different backup annotation for different Backups/Schedules. If I'm doing something wrong, please let me know; otherwise, perhaps someone needs to look at why BACKUP_ANNOTATION can't be set for specific Backups.
Summary
As a user of K8up
I want to be able to specify multiple different backups per namespace
So that I can have different settings for different kinds of backups.
Context
The goal is to be able to specify different backup settings for different backup targets in the same namespace, for example a DB backup which runs every night and a PVC backup which runs every hour.
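That context can be sketched with two Schedule objects in the same namespace. The names, cron expressions, bucket names, and secret names below are illustrative; note that with today's behavior both schedules would still target the same set of PVCs, which is exactly the limitation this issue describes.

```yaml
# Sketch: nightly DB backup and hourly PVC backup side by side.
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: nightly-db-backup
  namespace: my-namespace
spec:
  backup:
    schedule: '0 1 * * *'   # every night at 01:00
  backend:
    repoPasswordSecretRef:
      key: resticRepoPassword
      name: db-backup-credentials
    s3:
      bucket: db-backups
---
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: hourly-pvc-backup
  namespace: my-namespace
spec:
  backup:
    schedule: '0 * * * *'   # at the top of every hour
  backend:
    repoPasswordSecretRef:
      key: resticRepoPassword
      name: pvc-backup-credentials
    s3:
      bucket: pvc-backups
```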
Out of Scope
Further links
Acceptance criteria
Given a namespace
When multiple backups are specified with selectors
Then they select the PVCs or Pods to back up
Implementation Ideas
Implement a Pod/PVC selector to make it possible to have multiple backup objects in a single namespace. Instead of selecting all PVCs with RWX and all Pods with an annotation, specify a selector for Pods and PVCs to be backed up. Also make sure naming doesn't collide for Prometheus metrics, Restic repos and backup names. By specifying an empty selector (select all), the old behavior can be maintained.
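The selector idea above could be sketched as follows. The field names `podSelector` and `pvcSelector` are hypothetical (they are not part of the current API); standard Kubernetes `matchLabels` semantics are assumed, with an empty or omitted selector preserving the old select-everything behavior.

```yaml
# Hypothetical sketch of the proposed API, not the current one.
apiVersion: k8up.io/v1
kind: Backup
metadata:
  name: db-backup
  namespace: my-namespace
spec:
  # hypothetical field: only PVCs matching these labels are backed up
  pvcSelector:
    matchLabels:
      app: postgres
  # hypothetical field: only Pods matching these labels run backup commands
  podSelector:
    matchLabels:
      app: postgres
```

Using label selectors instead of a single operator-wide annotation would also sidestep the `BACKUP_ANNOTATION` limitation discussed in the comments, since each Backup object would carry its own targeting rules.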