diff --git a/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc b/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
index 6890d799d426..f26b73abd63d 100644
--- a/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
+++ b/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
@@ -19,17 +19,43 @@ If your cloud provider does not support snapshots or if your applications are on
 You can create xref:../../../backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc#oadp-creating-backup-hooks_backing-up-applications[backup hooks] to run commands before or after the backup operation.
 
-You can schedule backups by creating a xref:../../../backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc#oadp-scheduling-backups_backing-up-applications[`Schedule` CR] instead of a `Backup` CR.
+You can schedule backups by creating a xref:../../../backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc#oadp-scheduling-backups_backing-up-applications[Schedule CR] instead of a `Backup` CR.
 
 include::modules/oadp-creating-backup-cr.adoc[leveloffset=+1]
 include::modules/oadp-backing-up-pvs-csi.adoc[leveloffset=+1]
 include::modules/oadp-backing-up-applications-restic.adoc[leveloffset=+1]
 include::modules/oadp-using-data-mover-for-csi-snapshots.adoc[leveloffset=+1]
+
+[id="oadp-12-data-mover-ceph"]
+== Using OADP 1.2 Data Mover with Ceph storage
+
+You can use OADP 1.2 Data Mover to back up and restore application data for a cluster that uses CephFS, CephRBD, or both.
+
+OADP 1.2 Data Mover takes advantage of recently added Ceph features that support large-scale environments, including the shallow copy method, which is available in {product-name} 4.12 and later. This feature supports the use of `StorageClass` and `AccessMode` resources other than those found on the source persistent volume claim (PVC).
+
+This document has the following sections:
+
+* Prerequisites
+* Preparing Ceph `VolumeSnapshotClass` and `StorageClass` CRs for use with OADP 1.2 Data Mover
+* Backing up and restoring data using CephFS shallow copy
+* Backing up and restoring data with split volumes: CephFS and CephRBD
+
+include::modules/oadp-ceph-prerequisites.adoc[leveloffset=+1]
+include::modules/oadp-ceph-preparing-crs.adoc[leveloffset=+1]
+include::modules/oadp-ceph-cephfs.adoc[leveloffset=+1]
+include::modules/oadp-ceph-split.adoc[leveloffset=+1]
+
 [id="oadp-cleaning-up-after-data-mover-1-1-backup"]
-== Cleaning up after a backup using Data Mover with OADP 1.1.
+== Cleaning up after a backup using OADP 1.1 Data Mover
 
-For OADP 1.1., you must perform a data cleanup after you perform a backup using any version of Data Mover.
+For OADP 1.1 Data Mover, you must perform a data cleanup after you perform a backup.
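+
+Before you clean up, you can take a quick inventory of the Data Mover resources that might remain in the cluster. The following commands are a minimal sketch, not a substitute for the cleanup modules included below; they assume the `VolumeSnapshotBackup` and `VolumeSnapshotRestore` CRs that Data Mover creates:
+
+[source,terminal]
+----
+# List Data Mover backup and restore CRs in all namespaces
+$ oc get volumesnapshotbackups,volumesnapshotrestores -A
+
+# List cluster-scoped VolumeSnapshotContent objects that might remain after a backup
+$ oc get volumesnapshotcontent
+----
+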
 The cleanup consists of deleting the following resources:
@@ -42,7 +68,7 @@ include::modules/oadp-cleaning-up-after-data-mover-snapshots.adoc[leveloffset=+2]
 [id="deleting-cluster-resources"]
 === Deleting cluster resources
 
-Data Mover might leave cluster resources whether or not it successfully backs up your container storage interface (CSI) volume snapshots to a remote object store.
+OADP 1.1 Data Mover might leave cluster resources whether or not it successfully backs up your container storage interface (CSI) volume snapshots to a remote object store.
 
 include::modules/oadp-deleting-cluster-resources-following-success.adoc[leveloffset=+3]
 include::modules/oadp-deleting-cluster-resources-following-failure.adoc[leveloffset=+3]
@@ -60,4 +86,4 @@ include::modules/oadp-deleting-backups.adoc[leveloffset=+1]
 [role="_additional-resources"]
 .Additional resources
 
-* xref:../../../backup_and_restore/application_backup_and_restore/troubleshooting.adoc#velero-obtaining-by-downloading_oadp-troubleshooting[Downloading the Velero CLI tool]
\ No newline at end of file
+* xref:../../../backup_and_restore/application_backup_and_restore/troubleshooting.adoc#velero-obtaining-by-downloading_oadp-troubleshooting[Downloading the Velero CLI tool]
diff --git a/modules/oadp-ceph-cephfs.adoc b/modules/oadp-ceph-cephfs.adoc
new file mode 100644
index 000000000000..b1ebd7f62e71
--- /dev/null
+++ b/modules/oadp-ceph-cephfs.adoc
@@ -0,0 +1,167 @@
+// Module included in the following assemblies:
+//
+// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
+
+:_content-type: PROCEDURE
+[id="oadp-ceph-cephfs_{context}"]
+= Backing up and restoring data using OADP 1.2 Data Mover and CephFS storage
+
+You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data that uses CephFS storage by enabling the shallow copy feature of CephFS.
+
+// [IMPORTANT]
+// ====
+// The CephFS shallow copy feature can only be used for Data Mover backup operations. The shallow copy volume options are not supported for restore.
+// ====
+
+.Prerequisites
+
+* A stateful application is running in a separate namespace with persistent volume claims (PVCs) that use CephFS as the provisioner.
+* The default `StorageClass` and `VolumeSnapshotClass` CRs are defined for CephFS and OADP 1.2 Data Mover.
+
+.Procedure
+
+. Verify that the `deletionPolicy` field of the `VolumeSnapshotClass` CR is set to `Retain`:
++
+[source,terminal]
+----
+$ oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"Retention Policy: "}{.deletionPolicy}{"\n"}{end}'
+----
+
+. Verify that the `velero.io/csi-volumesnapshot-class` label of the `VolumeSnapshotClass` CR is set to `true`:
++
+[source,terminal]
+----
+$ oc get volumesnapshotclass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"labels: "}{.metadata.labels}{"\n"}{end}'
+----
+
+. Verify that the `StorageClass` annotations are set as expected:
++
+[source,terminal]
+----
+$ oc get storageClass -A -o jsonpath='{range .items[*]}{"Name: "}{.metadata.name}{" "}{"annotations: "}{.metadata.annotations}{"\n"}{end}'
+----
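++
+If a CR does not report the expected values, you can adjust it in place instead of recreating it. The following commands are a minimal sketch; the `<volume_snapshot_class_name>` and `<storage_class_name>` placeholders are assumptions that must be replaced with the names reported by the previous commands:
++
+[source,terminal]
+----
+# Set the deletion policy of the VolumeSnapshotClass to Retain
+$ oc patch volumesnapshotclass <volume_snapshot_class_name> --type merge -p '{"deletionPolicy": "Retain"}'
+
+# Add the label that Velero uses to select the snapshot class
+$ oc label volumesnapshotclass <volume_snapshot_class_name> velero.io/csi-volumesnapshot-class="true" --overwrite
+
+# Set an annotation on a StorageClass, for example the default-class annotation
+$ oc annotate storageclass <storage_class_name> storageclass.kubernetes.io/is-default-class="true" --overwrite
+----
+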
+. Create a Data Protection Application (DPA) CR similar to the following example. Use the Restic `Secret` that you created when you prepared your environment for working with OADP 1.2 Data Mover and Ceph as the value of the `spec.features.dataMover.credentialName` parameter. If you do not, the CR uses the default value `dm-credential` for this parameter. Note that there is no default value for the `enable` parameters; each can be set to `true` or `false`.
++
+Example DPA CR
++
+[source,yaml]
+----
+apiVersion: oadp.openshift.io/v1alpha1
+kind: DataProtectionApplication
+metadata:
+  name: velero-sample
+  namespace: openshift-adp
+spec:
+  backupLocations:
+  - velero:
+      config:
+        profile: default
+        region: us-east-1
+      credential:
+        key: cloud
+        name: cloud-credentials
+      default: true
+      objectStorage:
+        bucket: <bucket_name>
+        prefix: velero
+      provider: aws
+  configuration:
+    restic:
+      enable: false
+    velero:
+      defaultPlugins:
+      - openshift
+      - aws
+      - csi
+      - vsm
+  features:
+    dataMover:
+      credentialName: <restic_secret_name>
+      enable: true
+      volumeOptionsForStorageClasses:
+        ocs-storagecluster-cephfs:
+          sourceVolumeOptions:
+            accessMode: ReadOnlyMany
+            cacheAccessMode: ReadWriteMany
+            cacheStorageClassName: ocs-storagecluster-cephfs
+            storageClassName: ocs-storagecluster-cephfs-shallow
+----
+
+. To back up data:
+
+.. Create a `Backup` CR:
++
+Example `Backup` CR
++
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Backup
+metadata:
+  name: <backup_name>
+  namespace: <protected_ns>
+spec:
+  includedNamespaces:
+  - <app_ns>
+  storageLocation: velero-sample-1
+----
+
+.. Monitor the progress of the Data Mover backup and its artifacts by running the link:https://github.com/openshift/oadp-operator/blob/master/docs/examples/datamover_resources.sh[datamover_resources.sh] script.
+.. To check the progress of all the `VolumeSnapshotBackup` CRs, run the following command:
++
+[source,terminal]
+----
+$ oc get vsb -n <app_ns>
+----
+
+.. To check the progress of a specific `VolumeSnapshotBackup` CR, run the following command:
++
+[source,terminal]
+----
+$ oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}"
+----
+
+.. Wait several minutes and check that the `VolumeSnapshotBackup` CR has the status `Completed`.
+.. Verify that there is at least one snapshot in the object store that is given in the Restic `Secret`. You can check for this snapshot in your targeted `BackupStorageLocation`, which has a prefix of `/<OADP_namespace>`.
+
+. To restore data:
+
+.. Verify that the application namespace is deleted, as well as any `VolumeSnapshotContent` CRs that were created during backup.
+
+.. Create a `Restore` CR:
++
+Example `Restore` CR
++
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Restore
+metadata:
+  name: <restore_name>
+  namespace: <protected_ns>
+spec:
+  backupName: <previous_backup_name>
+----
+
+.. Monitor the progress of the Data Mover restore and its artifacts by running the link:https://github.com/openshift/oadp-operator/blob/master/docs/examples/datamover_resources.sh[datamover_resources.sh] script.
+.. To check the progress of all the `VolumeSnapshotRestore` CRs, run the following command:
++
+[source,terminal]
+----
+$ oc get vsr -n <app_ns>
+----
+
+.. To check the progress of a specific `VolumeSnapshotRestore` CR, run the following command:
++
+[source,terminal]
+----
+$ oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}"
+----
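++
+If you prefer to wait on the phase from the command line rather than rerunning the check manually, the following loop is a minimal sketch; the `<vsr_name>` and `<app_ns>` placeholders are assumptions that must match your `VolumeSnapshotRestore` CR:
++
+[source,terminal]
+----
+# Poll the VolumeSnapshotRestore phase every 10 seconds until it reports Completed
+$ until [ "$(oc get vsr <vsr_name> -n <app_ns> -o jsonpath='{.status.phase}')" = "Completed" ]; do sleep 10; done
+----
+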
+.. Verify that your application data has been restored:
++
+[source,terminal]
+----
+$ oc get route <route_name> -n <app_ns> -o jsonpath="{.spec.host}"
+----
diff --git a/modules/oadp-ceph-preparing-crs.adoc b/modules/oadp-ceph-preparing-crs.adoc
new file mode 100644
index 000000000000..ed21a9311875
--- /dev/null
+++ b/modules/oadp-ceph-preparing-crs.adoc
@@ -0,0 +1,166 @@
+// Module included in the following assemblies:
+//
+// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
+
+:_content-type: PROCEDURE
+[id="oadp-ceph-preparing-crs_{context}"]
+= Defining Ceph VolumeSnapshotClass and StorageClass CRs for use with OADP 1.2 Data Mover
+
+When you install {rh-storage-first}, it automatically creates default `StorageClass` and `VolumeSnapshotClass` custom resources (CRs). You must define these CRs for use with OpenShift API for Data Protection (OADP) 1.2 Data Mover.
+
+After you make these changes, you must make several other changes to your environment before you can perform your backup and restore operations.
+
+.Procedure
+
+. If you are using CephFS storage:
+
+.. In your `VolumeSnapshotClass` CR, set the `deletionPolicy` field to `Retain`, the `snapshot.storage.kubernetes.io/is-default-class` annotation to `true`, and the `velero.io/csi-volumesnapshot-class` label to `true`, as in the following example:
++
+Example `VolumeSnapshotClass` CR
++
+[source,yaml]
+----
+apiVersion: snapshot.storage.k8s.io/v1
+deletionPolicy: Retain
+driver: openshift-storage.cephfs.csi.ceph.com
+kind: VolumeSnapshotClass
+metadata:
+  annotations:
+    snapshot.storage.kubernetes.io/is-default-class: "true"
+  labels:
+    velero.io/csi-volumesnapshot-class: "true"
+  name: ocs-storagecluster-cephfsplugin-snapclass
+parameters:
+  clusterID: openshift-storage
+  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
+  csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
+----
+
+.. In your `StorageClass` CR, set the annotations as in the following example:
++
+Example `StorageClass` CR
++
+[source,yaml]
+----
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: ocs-storagecluster-cephfs
+  annotations:
+    description: Provides RWO and RWX Filesystem volumes
+    storageclass.kubernetes.io/is-default-class: "true"
+provisioner: openshift-storage.cephfs.csi.ceph.com
+parameters:
+  clusterID: openshift-storage
+  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
+  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
+  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
+  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
+  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
+  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
+  fsName: ocs-storagecluster-cephfilesystem
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+volumeBindingMode: Immediate
+----
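++
+Because only one `StorageClass` can be the cluster default, you might want to set and confirm the `storageclass.kubernetes.io/is-default-class` annotation from the command line instead of editing the YAML. The following commands are a minimal sketch that reuses the class name from the preceding example:
++
+[source,terminal]
+----
+# Mark the CephFS class as the cluster default
+$ oc patch storageclass ocs-storagecluster-cephfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
+
+# Confirm the result; the default class is flagged as "(default)"
+$ oc get storageclass
+----
+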
+. If you are using CephRBD storage:
+
+.. In your `VolumeSnapshotClass` CR, set the `deletionPolicy` field to `Retain` and the `velero.io/csi-volumesnapshot-class` label to `true`, as in the following example:
++
+Example `VolumeSnapshotClass` CR
++
+[source,yaml]
+----
+apiVersion: snapshot.storage.k8s.io/v1
+deletionPolicy: Retain
+driver: openshift-storage.rbd.csi.ceph.com
+kind: VolumeSnapshotClass
+metadata:
+  labels:
+    velero.io/csi-volumesnapshot-class: "true"
+  name: ocs-storagecluster-rbdplugin-snapclass
+parameters:
+  clusterID: openshift-storage
+  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
+  csi.storage.k8s.io/snapshotter-secret-namespace: openshift-storage
+----
+
+.. In your `StorageClass` CR, set the annotations as in the following example:
++
+Example `StorageClass` CR
++
+[source,yaml]
+----
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: ocs-storagecluster-ceph-rbd
+  annotations:
+    description: 'Provides RWO Filesystem volumes, and RWO and RWX Block volumes'
+provisioner: openshift-storage.rbd.csi.ceph.com
+parameters:
+  csi.storage.k8s.io/fstype: ext4
+  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
+  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
+  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
+  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-rbd-provisioner
+  imageFormat: '2'
+  clusterID: openshift-storage
+  imageFeatures: layering
+  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
+  pool: ocs-storagecluster-cephblockpool
+  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+volumeBindingMode: Immediate
+----
+
+. In all cases, create a CephFS `StorageClass` CR to make use of the shallow copy feature. In this CR, set the `backingSnapshot` parameter to `true`:
++
+Example CephFS `StorageClass` CR with `backingSnapshot` set to `true`
++
+[source,yaml]
+----
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: ocs-storagecluster-cephfs-shallow
+  annotations:
+    description: Provides RWO and RWX Filesystem volumes
+    storageclass.kubernetes.io/is-default-class: "false"
+provisioner: openshift-storage.cephfs.csi.ceph.com
+parameters:
+  csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
+  csi.storage.k8s.io/provisioner-secret-name: rook-csi-cephfs-provisioner
+  csi.storage.k8s.io/node-stage-secret-name: rook-csi-cephfs-node
+  csi.storage.k8s.io/controller-expand-secret-name: rook-csi-cephfs-provisioner
+  clusterID: openshift-storage
+  fsName: ocs-storagecluster-cephfilesystem
+  csi.storage.k8s.io/controller-expand-secret-namespace: openshift-storage
+  backingSnapshot: "true"
+  csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+volumeBindingMode: Immediate
+----
++
+[IMPORTANT]
+====
+Ensure that your default `VolumeSnapshotClass` and `StorageClass` CRs use the same provisioner.
+====
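++
+After you create the shallow copy `StorageClass` CR, you can confirm that the `backingSnapshot` parameter was stored as expected. The following check is a minimal sketch and assumes the `ocs-storagecluster-cephfs-shallow` name used in the preceding example:
++
+[source,terminal]
+----
+# Prints "true" if the shallow copy parameter is set
+$ oc get storageclass ocs-storagecluster-cephfs-shallow -o jsonpath='{.parameters.backingSnapshot}'
+----
+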
+. Configure a Restic `Secret` CR, because all OADP 1.2 Data Mover scenarios that use Ceph storage use the Restic option of VolSync:
++
+Example Restic `Secret` CR
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: <secret_name>
+type: Opaque
+stringData:
+  RESTIC_PASSWORD: <restic_password>
+----
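++
+Instead of writing the YAML by hand, you can create an equivalent `Secret` directly from the command line. The following command is a minimal sketch; the `<secret_name>` and `<restic_password>` placeholders, and the `openshift-adp` namespace, are assumptions that must match your installation:
++
+[source,terminal]
+----
+# Create the Restic Secret that the DPA references through spec.features.dataMover.credentialName
+$ oc create secret generic <secret_name> -n openshift-adp --from-literal=RESTIC_PASSWORD=<restic_password>
+----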
diff --git a/modules/oadp-ceph-prerequisites.adoc b/modules/oadp-ceph-prerequisites.adoc
new file mode 100644
index 000000000000..dd9b646ba737
--- /dev/null
+++ b/modules/oadp-ceph-prerequisites.adoc
@@ -0,0 +1,16 @@
+// Module included in the following assemblies:
+//
+// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
+
+
+:_content-type: CONCEPT
+[id="oadp-ceph-prerequisites_{context}"]
+= Prerequisites for backing up and restoring data using OADP 1.2 Data Mover in a cluster that uses Ceph storage
+
+The following prerequisites apply to all backup and restore operations of data using OpenShift API for Data Protection (OADP) 1.2 Data Mover in a cluster that uses Ceph storage:
+
+* You have installed {product-name} 4.11 or later.
+* You have installed the OADP Operator.
+* You have created a `Secret` with the credentials for your backup storage location.
+* You have installed {rh-storage-first}.
+* You have installed the latest VolSync Operator by using Operator Lifecycle Manager (OLM).
diff --git a/modules/oadp-ceph-split.adoc b/modules/oadp-ceph-split.adoc
new file mode 100644
index 000000000000..a610662dd552
--- /dev/null
+++ b/modules/oadp-ceph-split.adoc
@@ -0,0 +1,159 @@
+// Module included in the following assemblies:
+//
+// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
+
+:_content-type: PROCEDURE
+[id="oadp-ceph-split-crs_{context}"]
+= Backing up and restoring data using OADP 1.2 Data Mover and split volumes (CephFS and Ceph RBD)
+
+You can use OpenShift API for Data Protection (OADP) 1.2 Data Mover to back up and restore data in an environment that has _split volumes_, that is, an environment that uses both CephFS and CephRBD.
+
+// [IMPORTANT]
+// ====
+// The CephFS shallow copy feature can only be used for Data Mover backup operations. The shallow copy volume options are not supported for restore.
+// ====
+
+.Prerequisites
+
+* A stateful application is running in a separate namespace with PVCs provisioned by both CephFS and CephRBD.
+
+* The CephFS `StorageClass` and `VolumeSnapshotClass` CRs are set as the defaults.
+
+.Procedure
+
+. Create a Data Protection Application (DPA) CR similar to the following example. Use the Restic `Secret` that you created when you prepared your environment for working with OADP 1.2 Data Mover and Ceph as the value of the `spec.features.dataMover.credentialName` parameter. If you do not, the CR uses the default value `dm-credential` for this parameter.
++
+[NOTE]
+====
+You can define `volumeOptionsForStorageClasses` for multiple storage classes, which allows a backup to complete with volumes that have different providers.
+====
++
+Example DPA CR for an environment with split volumes
++
+[source,yaml]
+----
+apiVersion: oadp.openshift.io/v1alpha1
+kind: DataProtectionApplication
+metadata:
+  name: velero-sample
+  namespace: openshift-adp
+spec:
+  backupLocations:
+  - velero:
+      config:
+        profile: default
+        region: us-east-1
+      credential:
+        key: cloud
+        name: cloud-credentials
+      default: true
+      objectStorage:
+        bucket: <bucket_name>
+        prefix: velero
+      provider: aws
+  configuration:
+    restic:
+      enable: false
+    velero:
+      defaultPlugins:
+      - openshift
+      - aws
+      - csi
+      - vsm
+  features:
+    dataMover:
+      credentialName: <restic_secret_name>
+      enable: true
+      volumeOptionsForStorageClasses:
+        ocs-storagecluster-cephfs:
+          sourceVolumeOptions:
+            accessMode: ReadOnlyMany
+            cacheAccessMode: ReadWriteMany
+            cacheStorageClassName: ocs-storagecluster-cephfs
+            storageClassName: ocs-storagecluster-cephfs-shallow
+        ocs-storagecluster-ceph-rbd:
+          sourceVolumeOptions:
+            storageClassName: ocs-storagecluster-ceph-rbd
+            cacheStorageClassName: ocs-storagecluster-ceph-rbd
+          destinationVolumeOptions:
+            storageClassName: ocs-storagecluster-ceph-rbd
+            cacheStorageClassName: ocs-storagecluster-ceph-rbd
+----
+
+. To back up data:
+
+.. Create a `Backup` CR:
++
+Example `Backup` CR
++
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Backup
+metadata:
+  name: <backup_name>
+  namespace: <protected_ns>
+spec:
+  includedNamespaces:
+  - <app_ns>
+  storageLocation: velero-sample-1
+----
+
+.. Monitor the progress of the Data Mover backup and its artifacts by running the link:https://github.com/openshift/oadp-operator/blob/master/docs/examples/datamover_resources.sh[datamover_resources.sh] script.
+.. To check the progress of all the `VolumeSnapshotBackup` CRs, run the following command:
++
+[source,terminal]
+----
+$ oc get vsb -n <app_ns>
+----
+
+.. To check the progress of a specific `VolumeSnapshotBackup` CR, run the following command:
++
+[source,terminal]
+----
+$ oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}"
+----
+
+.. Wait several minutes and check that the `VolumeSnapshotBackup` CR has the status `Completed`.
+.. Verify that there is at least one snapshot in the object store that is given in the Restic `Secret`. You can check for this snapshot in your targeted `BackupStorageLocation`, which has a prefix of `/<OADP_namespace>`.
+
+. To restore data:
+
+.. Verify that the application namespace is deleted, as well as any `VolumeSnapshotContent` CRs that were created during backup.
+
+.. Create a `Restore` CR:
++
+Example `Restore` CR
++
+[source,yaml]
+----
+apiVersion: velero.io/v1
+kind: Restore
+metadata:
+  name: <restore_name>
+  namespace: <protected_ns>
+spec:
+  backupName: <previous_backup_name>
+----
+
+.. Monitor the progress of the Data Mover restore and its artifacts by running the link:https://github.com/openshift/oadp-operator/blob/master/docs/examples/datamover_resources.sh[datamover_resources.sh] script.
+.. To check the progress of all the `VolumeSnapshotRestore` CRs, run the following command:
++
+[source,terminal]
+----
+$ oc get vsr -n <app_ns>
+----
+
+.. To check the progress of a specific `VolumeSnapshotRestore` CR, run the following command:
++
+[source,terminal]
+----
+$ oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}"
+----
+
+.. Verify that your application data has been restored:
++
+[source,terminal]
+----
+$ oc get route <route_name> -n <app_ns> -o jsonpath="{.spec.host}"
+----
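++
+If the application exposes an HTTP endpoint, you can also check that the route responds. The following command is a minimal sketch; the `<route_name>` and `<app_ns>` placeholders are assumptions that must match your application:
++
+[source,terminal]
+----
+# Request the restored application through its route and print only the HTTP status code
+$ curl -k -s -o /dev/null -w "%{http_code}\n" "https://$(oc get route <route_name> -n <app_ns> -o jsonpath='{.spec.host}')"
+----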