
:_mod-docs-content-type: PROCEDURE
[id="dr-hosted-cluster-within-aws-region-backup_{context}"]
= Backing up a hosted cluster on {aws-short}

To recover your hosted cluster in your target management cluster, you first need to back up all of the relevant data.
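
This procedure refers to several environment variables. A minimal setup sketch follows; the values are placeholders that you must replace with values from your own environment:

[source,terminal]
----
$ export MGMT_CLUSTER_NAME="mgmt-source"        # Placeholder: source management cluster name
$ export HC_CLUSTER_NS="clusters"               # Placeholder: namespace that contains the HostedCluster
$ export HC_CLUSTER_NAME="hc-example"           # Placeholder: name of the HostedCluster
$ export NODEPOOLS="hc-example-us-east-1a"      # Placeholder: node pools that belong to the hosted cluster
$ export BACKUP_DIR="/var/backup"               # Placeholder: local directory for the backup files
$ export AWS_CREDS="${HOME}/.aws/credentials"   # Placeholder: path to the AWS credentials file
----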

.Procedure

. Create a config map to declare the source management cluster by entering the following command:
+
[source,terminal]
----
$ oc create configmap mgmt-parent-cluster -n default \
--from-literal=from=${MGMT_CLUSTER_NAME}
----
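+
Optionally, you can confirm that the config map records the source management cluster by viewing it:
+
[source,terminal]
----
$ oc get configmap mgmt-parent-cluster -n default -o yaml
----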

. Shut down the reconciliation in the hosted cluster and in the node pools by entering the following commands:
+
[source,terminal]
----
$ PAUSED_UNTIL="true"
$ oc patch -n ${HC_CLUSTER_NS} hostedclusters/${HC_CLUSTER_NAME} \
-p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
----
+
[source,terminal]
----
$ oc patch -n ${HC_CLUSTER_NS} nodepools/${NODEPOOLS} \
-p '{"spec":{"pausedUntil":"'${PAUSED_UNTIL}'"}}' --type=merge
----
+
[source,terminal]
----
$ oc scale deployment -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --replicas=0 \
kube-apiserver openshift-apiserver openshift-oauth-apiserver control-plane-operator
----
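+
Optionally, you can confirm that reconciliation is paused by checking the `pausedUntil` field on the hosted cluster:
+
[source,terminal]
----
$ oc get hostedcluster ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS} \
    -o jsonpath='{.spec.pausedUntil}'
----
+
The command returns `true` when reconciliation is paused.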

. Back up etcd and upload the data to an S3 bucket by running the following bash script:
+
[TIP]
====
For more information about backing up etcd, see "Backing up and restoring etcd on a hosted cluster".
====
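+
A minimal sketch of such a backup follows. The pod name `etcd-0`, the certificate paths, and the `${BUCKET_NAME}` variable are assumptions for illustration only; verify the pod name and the mounted certificate paths against your etcd pod specification before you run anything similar:
+
[source,terminal]
----
$ ETCD_POD="etcd-0"  # Assumed pod name
$ oc exec -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} ${ETCD_POD} -- env ETCDCTL_API=3 etcdctl \
    --cacert /etc/etcd/tls/etcd-ca/ca.crt \
    --cert /etc/etcd/tls/client/etcd-client.crt \
    --key /etc/etcd/tls/client/etcd-client.key \
    --endpoints=https://localhost:2379 \
    snapshot save /var/lib/snapshot.db  # Assumed certificate paths
$ oc cp -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} ${ETCD_POD}:/var/lib/snapshot.db ./snapshot.db
$ aws s3 cp ./snapshot.db s3://${BUCKET_NAME}/${HC_CLUSTER_NAME}-snapshot.db  # Assumed bucket variable
----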
. Back up the Kubernetes and {product-title} objects, including the following objects:
+
* `MachineDeployments`, `MachineSets`, and `Machines` from the Hosted Control Plane namespace
* `ControlPlane` secrets from the Hosted Control Plane namespace
+
.. Create the directories to store the backup files and restrict access to them by entering the following commands:
+
[source,terminal]
----
$ mkdir -p ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS} \
${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}
----
+
[source,terminal]
----
$ chmod 700 ${BACKUP_DIR}/namespaces/
----
+
.. Back up the `HostedCluster` objects from the `HostedCluster` namespace by entering the following commands:
+
[source,terminal]
----
$ oc get hc ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS} -o yaml > \
${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/hc-${HC_CLUSTER_NAME}.yaml
----
+
[source,terminal]
----
$ sed -i '' -e '/^status:$/,$d' \
${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/hc-${HC_CLUSTER_NAME}.yaml
----
+
.. Back up the `NodePool` objects from the `HostedCluster` namespace by entering the following commands:
+
[source,terminal]
----
$ oc get np ${NODEPOOLS} -n ${HC_CLUSTER_NS} -o yaml > \
${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/np-${NODEPOOLS}.yaml
----
+
[source,terminal]
----
$ sed -i '' -e '/^status:$/,$ d' \
${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/np-${NODEPOOLS}.yaml
----
+
.. Back up the secrets in the `HostedCluster` namespace by running the following shell script:
+
[source,terminal]
----
$ echo "--> HostedCluster Secrets:"
for s in $(oc get secret -n ${HC_CLUSTER_NS} | grep "^${HC_CLUSTER_NAME}" | awk '{print $1}'); do
oc get secret -n ${HC_CLUSTER_NS} $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}/secret-${s}.yaml
done
----
+
.. Back up the secrets in the `HostedCluster` control plane namespace by running the following shell script:
+
[source,terminal]
----
$ echo "--> HostedCluster ControlPlane Secrets:"
for s in $(oc get secret -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} | egrep -v "docker|service-account-token|oauth-openshift|NAME|token-${HC_CLUSTER_NAME}" | awk '{print $1}'); do
oc get secret -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/secret-${s}.yaml
done
----
+
.. Back up the hosted control plane by entering the following command:
+
[source,terminal]
----
$ oc get hcp ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o yaml > \
${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/hcp-${HC_CLUSTER_NAME}.yaml
----
+
.. Back up the cluster by entering the following commands:
+
[source,terminal]
----
$ CL_NAME=$(oc get hcp ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} \
-o jsonpath={.metadata.labels.\*} | grep ${HC_CLUSTER_NAME})
----
+
[source,terminal]
----
$ oc get cluster ${CL_NAME} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o yaml > \
${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/cl-${HC_CLUSTER_NAME}.yaml
----
+
.. Back up the {aws-short} cluster by entering the following command:
+
[source,terminal]
----
$ oc get awscluster ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o yaml > \
${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/awscl-${HC_CLUSTER_NAME}.yaml
----
+
.. Back up the {aws-short} `MachineTemplate` objects by entering the following command:
+
[source,terminal]
----
$ oc get awsmachinetemplate ${NODEPOOLS} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o yaml > \
${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/awsmt-${HC_CLUSTER_NAME}.yaml
----
+
.. Back up the {aws-short} `Machines` objects by running the following shell script:
+
[source,terminal]
----
$ echo "--> AWS Machine:"
$ CL_NAME=$(oc get hcp ${HC_CLUSTER_NAME} -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} \
    -o jsonpath={.metadata.labels.\*} | grep ${HC_CLUSTER_NAME})
for s in $(oc get awsmachines -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --no-headers | grep ${CL_NAME} | cut -f1 -d\ ); do
oc get -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} awsmachines $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/awsm-${s}.yaml
done
----
+
.. Back up the `MachineDeployments` objects by running the following shell script:
+
[source,terminal]
----
$ echo "--> HostedCluster MachineDeployments:"
for s in $(oc get machinedeployment -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o name); do
mdp_name=$(echo ${s} | cut -f 2 -d /)
oc get -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/machinedeployment-${mdp_name}.yaml
done
----
+
.. Back up the `MachineSets` objects by running the following shell script:
+
[source,terminal]
----
$ echo "--> HostedCluster MachineSets:"
for s in $(oc get machineset -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o name); do
ms_name=$(echo ${s} | cut -f 2 -d /)
oc get -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/machineset-${ms_name}.yaml
done
----
+
.. Back up the `Machines` objects from the Hosted Control Plane namespace by running the following shell script:
+
[source,terminal]
----
$ echo "--> HostedCluster Machine:"
for s in $(oc get machine -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} -o name); do
m_name=$(echo ${s} | cut -f 2 -d /)
oc get -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} $s -o yaml > ${BACKUP_DIR}/namespaces/${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}/machine-${m_name}.yaml
done
----
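+
After you back up all of the objects, you can confirm that the backup files were written by listing the backup directory:
+
[source,terminal]
----
$ find ${BACKUP_DIR} -type f -name "*.yaml" | sort
----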

. Clean up the `ControlPlane` routes by entering the following command:
+
[source,terminal]
----
$ oc delete routes -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME} --all
----
+
By entering that command, you enable the ExternalDNS Operator to delete the Route53 entries.
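+
You can confirm that the routes are deleted:
+
[source,terminal]
----
$ oc get routes -n ${HC_CLUSTER_NS}-${HC_CLUSTER_NAME}
----
+
When the cleanup is complete, the command reports that no resources are found.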

. Verify that the Route53 entries are clean by running the following script:
+
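The following minimal sketch, which assumes that the `aws` CLI is installed, that `${AWS_CREDS}` points to a valid {aws-short} credentials file, and that the hosted zone name contains the hosted cluster name, lists any records that remain:
+
[source,terminal]
----
$ export AWS_SHARED_CREDENTIALS_FILE=${AWS_CREDS}
$ ZONE_ID=$(aws route53 list-hosted-zones \
    --query "HostedZones[?contains(Name, '${HC_CLUSTER_NAME}')].Id" --output text)
$ aws route53 list-resource-record-sets --hosted-zone-id ${ZONE_ID} \
    --query "ResourceRecordSets[?contains(Name, '${HC_CLUSTER_NAME}')]"
----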

Check all of the {product-title} objects and the S3 bucket to verify that everything looks as expected.

.Next steps

Restore your hosted cluster.