Added cleanup and perform backup files
CarmiWisemon committed Mar 14, 2024
1 parent e4515fa commit c89af95
Showing 3 changed files with 278 additions and 1 deletion.
include::modules/installing-oadp-aws-sts.adoc[leveloffset=+1]
.Additional resources

* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/4.15/html/operators/user-tasks#olm-installing-from-operatorhub-using-web-console_olm-installing-operators-in-namespace[Installing from OperatorHub using the web console].
* link:https://docs.openshift.com/container-platform/4.15/backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.html[Backing up applications]

[id="oadp-aws-sts-backing-up-and-cleaning"]
== Example: Backing up a workload on OADP AWS STS, with an optional cleanup

include::modules/performing-a-backup-oadp-aws-sts.adoc[leveloffset=+2]

include::modules/cleanup-a-backup-oadp-aws-sts.adoc[leveloffset=+2]
modules/cleanup-a-backup-oadp-aws-sts.adoc (new file, 104 additions)
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/oadp-aws/oadp-aws-backing-up-applications.adoc

:_mod-docs-content-type: PROCEDURE
[id="cleanup-a-backup-oadp-aws-sts_{context}"]
= Cleaning up a cluster after a backup with OADP and AWS STS

If you need to uninstall the {oadp-first} Operator together with the backups and the S3 bucket from this example, follow these instructions.

.Procedure

. Delete the workload by running the following command:
+
[source,terminal]
----
$ oc delete ns hello-world
----

. Delete the Data Protection Application (DPA) by running the following command:
+
[source,terminal]
----
$ oc -n openshift-adp delete dpa ${CLUSTER_NAME}-dpa
----

. Delete the cloud storage by running the following command:
+
[source,terminal]
----
$ oc -n openshift-adp delete cloudstorage ${CLUSTER_NAME}-oadp
----
+
[WARNING]
====
If this command hangs, you might need to delete the finalizer by running the following command:
[source,terminal]
----
$ oc -n openshift-adp patch cloudstorage ${CLUSTER_NAME}-oadp -p '{"metadata":{"finalizers":null}}' --type=merge
----
====
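+
The wait-then-patch logic described in the warning can be scripted. The following is a minimal sketch only, not part of the documented procedure; the `wait_for_deletion` helper is hypothetical, and the commented usage assumes `oc` is on the `PATH` and `CLUSTER_NAME` is exported:
+
```shell
#!/bin/bash
# Hypothetical helper: poll a command until it stops succeeding (that is,
# the resource is gone) or a timeout elapses.
# Returns 0 when the resource disappears, 1 on timeout.
wait_for_deletion() {
  local check_cmd=$1 timeout=${2:-60} interval=${3:-5} waited=0
  while eval "$check_cmd" >/dev/null 2>&1; do
    if [ "$waited" -ge "$timeout" ]; then
      return 1
    fi
    sleep "$interval"
    waited=$((waited + interval))
  done
  return 0
}

# Example usage (assumes oc is configured):
#   oc -n openshift-adp delete cloudstorage "${CLUSTER_NAME}-oadp" --wait=false
#   if ! wait_for_deletion "oc -n openshift-adp get cloudstorage ${CLUSTER_NAME}-oadp" 60; then
#     oc -n openshift-adp patch cloudstorage "${CLUSTER_NAME}-oadp" \
#       -p '{"metadata":{"finalizers":null}}' --type=merge
#   fi
```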

. If the Operator is no longer required, remove it by running the following command:
+
[source,terminal]
----
$ oc -n openshift-adp delete subscription oadp-operator
----

. Remove the Operator namespace by running the following command:
+
[source,terminal]
----
$ oc delete ns openshift-adp
----

. If the backup and restore resources are no longer required, remove them from the cluster by running the following commands:
+
[source,terminal]
----
$ oc -n openshift-adp delete backup hello-world
----
+
[source,terminal]
----
$ oc -n openshift-adp delete restore hello-world
----

. Delete the backup, restore, and remote objects in {aws-short} S3 by running the following command:
+
[source,terminal]
----
$ velero backup delete hello-world
----

. If you no longer need the Custom Resource Definitions (CRD), remove them from the cluster by running the following command:
+
[source,terminal]
----
$ for CRD in `oc get crds | grep velero | awk '{print $1}'`; do oc delete crd $CRD; done
----

. Delete the {aws-short} S3 bucket by running the following commands:
+
[source,terminal]
----
$ aws s3 rm s3://${CLUSTER_NAME}-oadp --recursive
----
+
[source,terminal]
----
$ aws s3api delete-bucket --bucket ${CLUSTER_NAME}-oadp
----
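+
The two bucket commands can also be wrapped so the cleanup fails fast if either step errors. This is an illustrative sketch only; the `delete_oadp_bucket` function and the `AWS_CLI` override are hypothetical (the override exists purely so the sketch can be exercised with a stub in place of the real `aws` binary):
+
```shell
#!/bin/bash
# Hypothetical wrapper: empty the S3 bucket, then delete it.
# Returns non-zero as soon as either AWS CLI call fails.
delete_oadp_bucket() {
  local aws_cli=${AWS_CLI:-aws} bucket=$1
  "$aws_cli" s3 rm "s3://${bucket}" --recursive || return 1
  "$aws_cli" s3api delete-bucket --bucket "${bucket}" || return 1
}

# Example usage (assumes the aws CLI is configured):
#   delete_oadp_bucket "${CLUSTER_NAME}-oadp"
```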

. Detach the policy from the role by running the following command:
+
[source,terminal]
----
$ aws iam detach-role-policy --role-name "${ROLE_NAME}" --policy-arn "${POLICY_ARN}"
----

. Delete the role by running the following command:
+
[source,terminal]
----
$ aws iam delete-role --role-name "${ROLE_NAME}"
----
modules/performing-a-backup-oadp-aws-sts.adoc (new file, 166 additions)
// Module included in the following assemblies:
//
// * backup_and_restore/application_backup_and_restore/oadp-aws/oadp-aws-backing-up-applications.adoc

:_mod-docs-content-type: PROCEDURE
[id="performing-a-backup-oadp-aws-sts_{context}"]
= Performing a backup with OADP and AWS STS

Perform a backup with {oadp-first} and AWS Security Token Service (AWS STS). The following example `hello-world` application has no persistent volumes (PVs) attached. Either Data Protection Application (DPA) configuration works with this procedure.

.Procedure

. Create a workload to back up by running the following commands:
+
[source,terminal]
----
$ oc create namespace hello-world
----
+
[source,terminal]
----
$ oc new-app -n hello-world --image=docker.io/openshift/hello-openshift
----

. Expose the route by running the following command:
+
[source,terminal]
----
$ oc expose service/hello-openshift -n hello-world
----

. Check that the application is working by running the following command:
+
[source,terminal]
----
$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`
----
+
.Example output
[source,terminal]
----
Hello OpenShift!
----


. Back up the workload by running the following command:
+
[source,terminal]
----
$ cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  includedNamespaces:
  - hello-world
  storageLocation: ${CLUSTER_NAME}-dpa-1
  ttl: 720h0m0s
EOF
----

. Wait until the backup is completed by watching its status with the following command:
+
[source,terminal]
----
$ watch "oc -n openshift-adp get backup hello-world -o json | jq .status"
----
+
.Example output
[source,json]
----
{
"completionTimestamp": "2022-09-07T22:20:44Z",
"expiration": "2022-10-07T22:20:22Z",
"formatVersion": "1.1.0",
"phase": "Completed",
"progress": {
"itemsBackedUp": 58,
"totalItems": 58
},
"startTimestamp": "2022-09-07T22:20:22Z",
"version": 1
}
----
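+
The interactive `watch` loop can be replaced with a non-interactive poller when you script this step. This is a sketch under assumptions: the `wait_for_phase` helper is hypothetical, and the commented usage assumes `oc` is configured:
+
```shell
#!/bin/bash
# Hypothetical poller: run a command that prints a phase string and wait
# until it prints the wanted phase. Returns 0 on success, 1 on timeout.
wait_for_phase() {
  local phase_cmd=$1 want=${2:-Completed} timeout=${3:-600} interval=${4:-10} waited=0
  while [ "$(eval "$phase_cmd" 2>/dev/null)" != "$want" ]; do
    if [ "$waited" -ge "$timeout" ]; then
      return 1
    fi
    sleep "$interval"
    waited=$((waited + interval))
  done
  return 0
}

# Example usage (assumes oc is configured):
#   wait_for_phase "oc -n openshift-adp get backup hello-world -o jsonpath='{.status.phase}'"
```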

. Delete the demo workload by running the following command:
+
[source,terminal]
----
$ oc delete ns hello-world
----

. Restore the workload from the backup by running the following command:
+
[source,terminal]
----
$ cat << EOF | oc create -f -
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: hello-world
  namespace: openshift-adp
spec:
  backupName: hello-world
EOF
----

. Wait until the restore is completed by watching its status with the following command:
+
[source,terminal]
----
$ watch "oc -n openshift-adp get restore hello-world -o json | jq .status"
----
+
.Example output
[source,json]
----
{
"completionTimestamp": "2022-09-07T22:25:47Z",
"phase": "Completed",
"progress": {
"itemsRestored": 38,
"totalItems": 38
},
"startTimestamp": "2022-09-07T22:25:28Z",
"warnings": 9
}
----

. Check that the workload is restored by running the following command:
+
[source,terminal]
----
$ oc -n hello-world get pods
----
+
.Example output
[source,terminal]
----
NAME READY STATUS RESTARTS AGE
hello-openshift-9f885f7c6-kdjpj 1/1 Running 0 90s
----

. Check that the application responds by running the following command:
+
[source,terminal]
----
$ curl `oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}'`
----
+
.Example output
[source,terminal]
----
Hello OpenShift!
----
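+
The final check can also be expressed as a pass/fail smoke test for scripting. The `check_hello` helper below is an illustrative sketch only, not part of the documented procedure; it assumes `curl` is available:
+
```shell
#!/bin/bash
# Hypothetical smoke test: fetch a URL and compare the response body to the
# expected greeting. Returns 0 on a match, non-zero otherwise.
check_hello() {
  local url=$1
  [ "$(curl -s "$url")" = "Hello OpenShift!" ]
}

# Example usage (assumes the route from the earlier step exists):
#   check_hello "http://$(oc get route/hello-openshift -n hello-world -o jsonpath='{.spec.host}')"
```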

[NOTE]
====
For troubleshooting tips, see the OADP team’s link:https://access.redhat.com/articles/5456281[troubleshooting documentation].
====
