diff --git a/content/blog/2025-11-17-introducing-ramendr-starter-kit.adoc b/content/blog/2025-11-17-introducing-ramendr-starter-kit.adoc new file mode 100644 index 000000000..eff623441 --- /dev/null +++ b/content/blog/2025-11-17-introducing-ramendr-starter-kit.adoc @@ -0,0 +1,127 @@
+---
+ date: 2025-11-17
+ title: Introducing RamenDR Starter Kit
+ summary: A new pattern that illustrates Regional DR for virtualization workloads running on OpenShift Data Foundation
+ author: Martin Jackson
+ blog_tags:
+ - patterns
+ - announce
+---
+:toc:
+:imagesdir: /images
+
+We are excited to announce that the link:https://validatedpatterns.io/patterns/ramendr-starter-kit/[**validatedpatterns-sandbox/ramendr-starter-kit**] repository is now available and has reached the Sandbox tier of Validated Patterns.
+
+== The Pattern
+
+This Validated Pattern draws on previous work that models Regional Disaster Recovery, adds Virtualization to
+the managed clusters, starts virtual machines, and can fail them over and back between managed clusters.
+
+The pattern ensures that all of the prerequisites are set up correctly and in the right order, including details
+such as the SSL CA certificate copying that is necessary for both the Ceph replication and the OADP/Velero
+replication to work correctly.
+
+The user is in control of when the failover happens; the pattern provides a script to do the explicit failover
+required for Ramen Regional DR of a discovered application.
+
+== Why Does DR Matter?
+
+In a perfect world, every application would have its own knowledge of where it is available and would shard and
+replicate its own data. But many applications were built without these concepts in mind, and even if a company
+wanted to rewrite every application and could afford to, it could not rewrite and redeploy them all at once.
+
+Thus, users benefit from being able to rely on technology products and solutions to enable a regional disaster
+recovery capability when the application does not support it natively.
+
+In several industries, the ability to recover a workload in the event of a regional disaster is considered a
+requirement for applications that the user deems critical, but that cannot provide DR natively.
+
+== Learnings from Developing the Pattern: On the use of AI to generate scripts
+
+This pattern is also noteworthy in that all of the major shell scripts in the pattern were written by
+Cursor. This was a major learning experience, both in the capabilities of modern AI coding tools and in some
+of their limitations.
+
+=== The Good
+
+* Error handling and visual output are better than the shell scripts (or Ansible code) I would have written if
+I had written all of this from scratch.
+* The "inner loop" of development felt a lot faster using the generated code than if I had written it all from
+scratch. The value in this pattern is in the use of the components together, not in finding new and novel
+ways to retrieve certificate material from a running OpenShift cluster.
+
+=== The Bad
+
+* Even when the context "knew" it was working on OpenShift and Hive, it used different mechanisms to retrieve
+kubeconfig files for managed clusters. I had to remind it to use a known-good mechanism, which had worked for
+downloading kubeconfigs to the user workstation.
+* Several of these scripts are bash scripts wrapped in Kubernetes jobs or cronjobs. The generator had some problems
+with using local variables in places where they were not allowed, and with using shell here documents in places
+where YAML did not allow them. Eventually I set the context that we were better off using Helm `.Files.Get` calls
+and externalizing the scripts from the jobs altogether (a minimal sketch of that approach follows).
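+
+To make that concrete, here is a minimal sketch of the externalized-script approach, assuming a hypothetical
+script at `scripts/example-check.sh` inside a chart; the file names and image here are illustrative, not the
+pattern's actual content:
+
+[source,yaml]
+----
+# Hypothetical Helm template: templates/example-job.yaml
+# The script lives in the chart at scripts/example-check.sh and is injected
+# with .Files.Get, so the Job manifest never embeds a shell here document.
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: example-check-script
+data:
+  example-check.sh: |
+{{ .Files.Get "scripts/example-check.sh" | indent 4 }}
+---
+apiVersion: batch/v1
+kind: Job
+metadata:
+  name: example-check
+spec:
+  template:
+    spec:
+      restartPolicy: Never
+      containers:
+        - name: check
+          image: registry.redhat.io/openshift4/ose-cli:latest
+          command: ["/bin/bash", "/scripts/example-check.sh"]
+          volumeMounts:
+            - name: script
+              mountPath: /scripts
+      volumes:
+        - name: script
+          configMap:
+            name: example-check-script
+            defaultMode: 0755
+----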
+
+=== The Ugly
+
+* I am uncomfortable with the level of duplication in the code. Time will tell whether some of these scripts will become
+problematic to maintain. A more rigorous analysis might find several opportunities to refactor code.
+* The sheer volume of code makes it a bit daunting to look at. All of the major scripts in the pattern are over 150
+lines long, and the longest (as of this publication) is over 1300 lines long.
+* Some of the choices of technique and dependency loading were a bit too generic. We have images for Validated
+Patterns that provide things like a Python interpreter with access to the YAML module, the AWS CLI, and other tools
+that turned out to be useful. I left in the Cursor-generated frameworks for downloading things like the AWS CLI,
+because they correctly detect that those dependencies are already installed, and they may prove beneficial if we
+move to different images.
+
+== DR Terminology - What are we talking about?
+
+**High Availability (“HA”)** includes all characteristics, qualities and workflows of a system that prevent
+unavailability events for workloads. This is a very broad category, and includes things like redundancy built into
+individual disks, such that failure of a single drive does not result in an outage for the workload. Load balancing,
+redundant power supplies, and running a workload across multiple fault domains are some of the techniques that belong to
+HA, because they keep the workload from becoming unavailable in the first place. HA is usually completely automatic, in that it does not require a real-time human in the loop.
+
+**Disaster Recovery (“DR”)** includes the characteristics, qualities and workflows of a system to recover from an
+outage event when there has been data loss. DR events often include recognized major environmental disasters (weather
+events such as hurricanes, tornadoes, and fires) or other large-scale problems that cause widespread devastation or
+disruption to a location where workloads run. Critical personnel might also be affected (that is, unavailable because
+they are dead or disabled), so questions of how decisions will be made without key decision makers are also considered.
+(This is often included under the heading of “Business Continuity,” which is closely related to DR.) There are two
+critical differences between HA and DR: the first is the expectation of human decision-making in the loop, and the
+other is the data loss aspect. That is, in a DR event we know we have lost data; we are working out how much is
+acceptable to lose and how quickly we can restore workloads. This is what makes it fundamentally different from HA;
+but some organizations do not really see or enforce this distinction, and that leads to a lot of confusion. Some
+vendors also do not strongly make this distinction, which does not discourage that confusion.
+
+DR policies can be driven by external regulatory or legal requirements, or an organization’s internal understanding of
+what such external legal and regulatory requirements mean.
+That is to say, the law may not specifically require a
+particular level of DR, but the organization interprets the law to mean that is what it needs to do to be compliant
+with the law or regulation. The Sarbanes-Oxley Act (“SOX”) in the US was adopted after the Enron and WorldCom financial
+scandals of the early 2000s, and includes a number of requirements for accurate financial reporting, which many
+organizations have used to justify and fund substantial BC/DR programs.
+
+**Business Continuity (“BC”, usually used together with DR as “BCDR” or “BC/DR”)** refers primarily to the people
+side of recovery from disasters. Large organizations will have teams that focus on BC/DR and use that term in the team
+title or name. Such teams will be responsible for making sure that engineering and application groups are compliant with
+the organization’s BC/DR policies. This can involve scheduling and running BC/DR “drills” and actual live testing of
+BC/DR technologies.
+
+**Recovery Time Objective (“RTO”)** is the amount of time it takes to restore a failed workload to service. This is
+NOT the amount of data that it is tolerable to lose; that is defined by the companion RPO.
+
+**Recovery Point Objective (“RPO”)** is the amount of data a workload can stand to lose. One confusing aspect of RPO is that it can be defined as a time interval (as opposed to, say, a number of transactions). An RPO of “5 minutes”
+should be read as “we want to lose no more than 5 minutes’ worth of data.”
+
+RPO/RTO: Many people want a 0/0 RPO/RTO, often without understanding what it takes to implement that. It can be
+fantastically expensive, even for the world’s largest and best-funded organizations.
+
+== Special Thanks
+
+This pattern was an especially challenging one to design and complete, because of the number of elements in it
+and the timing issues inherent in eventual-consistency models. Therefore, special thanks are due to the following
+people, without whom this pattern would not exist:
+
+* The authors of the original link:https://github.com/validatedpatterns/regional-resiliency-pattern[regional-resiliency-pattern], which provided the foundation for the ODF and RamenDR components and for building the managed clusters via Hive
+* Aswin Suryanarayanan, who helped immensely with some late challenges with Submariner
+* Annette Clewett, without whom this pattern would not exist. Annette took the time to thoroughly explain all of RamenDR's dependencies and how to orchestrate them all correctly.
diff --git a/content/patterns/ramendr-starter-kit/_index.adoc b/content/patterns/ramendr-starter-kit/_index.adoc new file mode 100644 index 000000000..47fb1cd70 --- /dev/null +++ b/content/patterns/ramendr-starter-kit/_index.adoc @@ -0,0 +1,75 @@
+---
+title: RamenDR Starter Kit
+date: 2025-11-13
+tier: sandbox
+summary: This pattern demonstrates the use of Red Hat OpenShift Data Foundation for Regional Disaster Recovery of virtualization workloads
+rh_products:
+- Red Hat OpenShift Container Platform
+- Red Hat OpenShift Virtualization
+- Red Hat Enterprise Linux
+- Red Hat OpenShift Data Foundation
+- Red Hat OpenShift Data Foundation MultiCluster Orchestrator
+- Red Hat OpenShift Data Foundation DR Hub Operator
+- Red Hat Advanced Cluster Management
+industries: []
+aliases: /ramendr-starter-kit/
+pattern_logo: ansible-edge.png
+links:
+  github: https://github.com/validatedpatterns-sandbox/ramendr-starter-kit/
+  install: getting-started
+  bugs: https://github.com/validatedpatterns-sandbox/ramendr-starter-kit/issues
+  feedback: https://docs.google.com/forms/d/e/1FAIpQLScI76b6tD1WyPu2-d_9CCVDr3Fu5jYERthqLKJDUGwqBg7Vcg/viewform
+ci: ramendr-starter-kit
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+include::modules/comm-attributes.adoc[]
+
+== RamenDR Regional Disaster Recovery with Virtualization Starter Kit
+
+This pattern sets up three clusters as recommended for OpenShift Data Foundation Regional Disaster Recovery, as
+documented link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/index[here].
+
+Of additional interest is that the workload it protects, and can fail over, consists of running virtual
+machines.
+
+The setup process is relatively intricate; the goal of this pattern is to handle all the intricate parts and present
+a functional DR-capable starting point for virtual machine workloads. In particular, this pattern takes care to sequence
+installations and validate prerequisites for all of the core components of the Disaster Recovery system.
+
+Note that this pattern must be customized to specify DNS base domains for the managed clusters, which makes
+forking the pattern (which we generally recommend anyway, in case you want to make other customizations) effectively
+a requirement. The link:https://validatedpatterns.io/patterns/ramendr-starter-kit/getting-started/[**Getting Started**] doc has
+details on what needs to be changed and how to commit and push those changes.
+
+=== Background
+
+It would be ideal if all applications in the world understood availability concepts natively and had their own
+integrated regional failover strategies. However, many workloads do not, and users who need regional disaster recovery
+capabilities need to solve this problem for the applications that cannot solve it for themselves.
+
+This pattern uses OpenShift Virtualization (the productization of KubeVirt) to run the VM workloads that the pattern protects.
+
+=== Solution elements
+
+==== Red Hat Technologies
+
+* Red Hat OpenShift Container Platform (Kubernetes)
+* Red Hat Advanced Cluster Management (RHACM)
+* Red Hat OpenShift Data Foundation (ODF, including the Multicluster Orchestrator)
+* Submariner (VPN)
+* Red Hat OpenShift GitOps (ArgoCD)
+* OpenShift Virtualization (KubeVirt)
+* Red Hat Enterprise Linux 9 (on the VMs)
+
+==== Other technologies this pattern uses
+
+* HashiCorp Vault (Community Edition)
+* External Secrets Operator (Community Edition)
+
+=== Architecture
+
+.ramendr-architecture-diagram
+image::/images/ramendr-starter-kit/ramendr-architecture.drawio.png[ramendr-starter-kit-architecture,title="RamenDR Starter Kit Architecture"] diff --git a/content/patterns/ramendr-starter-kit/cluster-sizing.adoc b/content/patterns/ramendr-starter-kit/cluster-sizing.adoc new file mode 100644 index 000000000..98a3b2a61 --- /dev/null +++ b/content/patterns/ramendr-starter-kit/cluster-sizing.adoc @@ -0,0 +1,21 @@
+---
+title: Cluster sizing
+weight: 50
+aliases: /ramendr-starter-kit/cluster-sizing/
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+
+include::modules/comm-attributes.adoc[]
+include::modules/ramendr-starter-kit/metadata-ramendr-starter-kit.adoc[]
+
+The OpenShift hub cluster consists of 3 control plane nodes and 3 worker nodes; the workers are standard
+compute nodes. For the node size we used **m5.4xlarge** instances on AWS.
+
+This pattern has been tested only on AWS at present, because of the integration of both Hive and OpenShift
+Virtualization. We may publish a later revision that supports more hyperscalers.
+
+include::modules/cluster-sizing-template.adoc[]
+
diff --git a/content/patterns/ramendr-starter-kit/getting-started.adoc b/content/patterns/ramendr-starter-kit/getting-started.adoc new file mode 100644 index 000000000..059886f56 --- /dev/null +++ b/content/patterns/ramendr-starter-kit/getting-started.adoc @@ -0,0 +1,250 @@
+---
+title: Getting Started
+weight: 10
+aliases: /ramendr-starter-kit/getting-started/
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+include::modules/comm-attributes.adoc[]
+
+[id="deploying-ramendr-starter-kit-pattern"]
+== Deploying the RamenDR Starter Kit Pattern
+
+.Prerequisites
+
+* An OpenShift cluster
+ ** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console].
+ ** Select *OpenShift \-> Red Hat OpenShift Container Platform \-> Create cluster*.
+* A GitHub account with a personal access token that has repository read and write permissions.
+* The Helm binary. For instructions, see link:https://helm.sh/docs/intro/install/[Installing Helm].
+* Additional installation tool dependencies. For details, see link:https://validatedpatterns.io/learn/quickstart/[Patterns quick start].
+
+You need a cluster for deploying the GitOps management hub assets; this pattern builds the managed clusters itself by using Hive.
+
+[id="preparing-for-deployment"]
+== Preparing for deployment
+.Procedure
+
+. Fork the link:https://github.com/validatedpatterns-sandbox/ramendr-starter-kit[ramendr-starter-kit] repository on GitHub. You must fork the repository because your fork is updated as part of the GitOps and DevOps processes.
+
+. Clone the forked copy of this repository.
++
+[source,terminal]
+----
+$ git clone git@github.com:your-username/ramendr-starter-kit.git
+----
+
+. Go to the root directory of your Git repository:
++
+[source,terminal]
+----
+$ cd /path/to/your/repository
+----
+
+. Run the following command to set the upstream repository:
++
+[source,terminal]
+----
+$ git remote add -f upstream git@github.com:validatedpatterns-sandbox/ramendr-starter-kit.git
+----
+
+. Verify the setup of your remote repositories by running the following command:
++
+[source,terminal]
+----
+$ git remote -v
+----
++
+.Example output
++
+[source,terminal]
+----
+origin	git@github.com:kquinn1204/ramendr-starter-kit.git (fetch)
+origin	git@github.com:kquinn1204/ramendr-starter-kit.git (push)
+upstream	git@github.com:validatedpatterns-sandbox/ramendr-starter-kit.git (fetch)
+upstream	git@github.com:validatedpatterns-sandbox/ramendr-starter-kit.git (push)
+----
+
+. Make a local copy of the secrets template outside of your repository to hold credentials for the pattern.
++
+[WARNING]
+====
+Do not add, commit, or push this file to your repository. Doing so may expose personal credentials to GitHub.
+====
++
+Run the following command:
++
+[source,terminal]
+----
+$ cp values-secret.yaml.template ~/values-secret.yaml
+----
+
+. Populate this file with the secrets, or credentials, that are needed to deploy the pattern successfully:
++
+[source,terminal]
+----
+$ vi ~/values-secret.yaml
+----
+
+.. Edit the `vm-ssh` section to include the username, private key, and public key. Note that the `privatekey` and `publickey` fields take a `path` to a key file rather than an inline `value`. For example:
++
+[source,yaml]
+----
+ - name: vm-ssh
+   vaultPrefixes:
+   - global
+   fields:
+   - name: username
+     value: 'cloud-user'
+   - name: privatekey
+     path: '/path/to/private-ssh-key'
+   - name: publickey
+     path: '/path/to/public-ssh-key'
+----
++
+Paste the path to your locally stored private and public keys. If you do not have a key pair, generate one using `ssh-keygen`.
+
+.. Edit the `cloud-init` section to include the `userData` block to use with cloud-init. For example:
++
+[source,yaml]
+----
+ - name: cloud-init
+   vaultPrefixes:
+   - global
+   fields:
+   - name: userData
+     value: |-
+       #cloud-config
+       user: 'cloud-user'
+       password: 'cloud-user'
+       chpasswd: { expire: False }
+----
+
+.. Edit the `aws` section to refer to the file containing your AWS credentials:
++
+[source,yaml]
+----
+ - name: aws
+   fields:
+   - name: aws_access_key_id
+     ini_file: ~/.aws/credentials
+     ini_key: aws_access_key_id
+   - name: aws_secret_access_key
+     ini_file: ~/.aws/credentials
+     ini_key: aws_secret_access_key
+   - name: baseDomain
+     value: aws.example.com
+   - name: pullSecret
+     path: ~/pull_secret.json
+   - name: ssh-privatekey
+     path: ~/.ssh/privatekey
+   - name: ssh-publickey
+     path: ~/.ssh/publickey
+----
+
+.. Edit the `openshiftPullSecret` section to refer to the file containing your OpenShift pull secret:
++
+[source,yaml]
+----
+ - name: openshiftPullSecret
+   fields:
+   - name: .dockerconfigjson
+     path: ~/pull_secret.json
+----
+
+. Create and switch to a new branch named `my-branch` by running the following command:
++
+[source,terminal]
+----
+$ git checkout -b my-branch
+----
+
+. You will almost certainly need to customize the pattern, because it cannot infer the AWS
+domain(s) you have control over. In particular, edit link:https://github.com/validatedpatterns-sandbox/ramendr-starter-kit/blob/main/charts/hub/rdr/values.yaml[hub/rdr/values.yaml] to set the `baseDomain` and possibly the `aws.region` settings.
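++
+A hypothetical excerpt of those settings (the exact key layout in your fork's `values.yaml` may differ;
+treat these names as illustrative):
++
+[source,yaml]
+----
+# charts/hub/rdr/values.yaml (illustrative excerpt)
+# baseDomain must be a DNS domain you control (for example, a Route 53 zone).
+baseDomain: example.com
+aws:
+  region: us-east-2  # pick regions appropriate for your DR topology
+----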
++
+If you made any changes to these or any other files tracked by git, `git add` them and then commit the changes by running the following command:
++
+[source,terminal]
+----
+$ git commit -m "any updates"
+----
+
+. Push the changes to your forked repository:
++
+[source,terminal]
+----
+$ git push origin my-branch
+----
+
+The preferred way to install this pattern is by using the `./pattern.sh` script.
+
+[id="deploying-cluster-using-patternsh-file"]
+== Deploying the pattern by using the pattern.sh file
+
+To deploy the pattern by using the `pattern.sh` file, complete the following steps:
+
+. Log in to your cluster by following this procedure:
+
+.. Obtain an API token by visiting `https://oauth-openshift.apps.<your-cluster-domain>/oauth/token/request`.
+
+.. Log in to the cluster by running the following command:
++
+[source,terminal]
+----
+$ oc login --token=<token> --server=https://api.<your-cluster-domain>:6443
+----
++
+Or log in by setting your kubeconfig, by running the following command:
++
+[source,terminal]
+----
+$ export KUBECONFIG=~/<path-to-kubeconfig>
+----
+
+. Deploy the pattern to your cluster. Run the following command:
++
+[source,terminal]
+----
+$ ./pattern.sh make install
+----
+
+.Verification
+
+. Verify that the Operators have been installed on the hub cluster. Navigate to the *Operators → Installed Operators* page in the OpenShift Container Platform web console on the hub cluster (in the "local-cluster" view).
++
+.ramendr-starter-kit-operators
+image::/images/ramendr-starter-kit/ramendr-hub-operators.png[ramendr-starter-kit-operators,title="RamenDR Hub Operators"]
+
+. Verify that the primary and secondary managed clusters have been built. This can take close to an hour on AWS. On the hub cluster, navigate to *All Clusters* in the OpenShift Container Platform web console:
++
+.ramendr-starter-kit-clusters
+image::/images/ramendr-starter-kit/ramendr-clusters-built.png[ramendr-starter-kit-clusters,title="RamenDR Clusters"]
+. Wait some time for everything to deploy to all the clusters. It might take up to another hour from when the managed clusters finish building. You can track the progress through the `Hub ArgoCD` UI, available from the application launcher (the nine-dots menu); watch especially the "opp-policy" and "regional-dr" applications. Most of the critical resources are in the regional-dr application. (At present, the opp-policy app may show missing/out-of-sync, and the regional-dr app may show OutOfSync, even when both are healthy. We are working on a fix; track progress link:https://github.com/validatedpatterns-sandbox/ramendr-starter-kit/issues/4[here].)
++
+.ramendr-starter-kit-operators-applications
+image::/images/ramendr-starter-kit/ramendr-starter-kit-hub-applications.png[ramendr-starter-kit-hub-applications,title="RamenDR Starter Kit Applications"]
+. Eventually, the Virtual Machines will be deployed and the Disaster Recovery Placement Control (DRPC) will show that resources are now protected. This screen can be reached via *All Clusters → Data Services → Disaster Recovery → Protected Applications* on the hub cluster. Normally the Kubernetes objects will synchronize faster than the application volumes. When both indicators show Healthy, it is safe to trigger a failover:
++
+.ramendr-starter-kit-running-vms
+image::/images/ramendr-starter-kit/ramendr-starter-kit-running-vms.png[ramendr-starter-kit-running-vms,title="RamenDR Starter Kit Running VMs"]
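++
+You can also check the DRPC state from the command line. A minimal sketch, assuming your hub cluster
+kubeconfig is active (the namespaces and object names are set by the pattern, so your output will vary):
++
+[source,terminal]
+----
+$ oc get drpolicy
+$ oc get drpc -A
+----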
+. You might want to see the VMs themselves running. They will be on the primary cluster in the *Virtualization → VirtualMachines* area. The pattern configures 4 RHEL 9 VMs by default:
++
+.ramendr-starter-kit-trigger-failover
+image::/images/ramendr-starter-kit/ramendr-starter-kit-trigger-failover-1.png[ramendr-starter-kit-trigger-failover-1,title="RamenDR Starter Kit Trigger Failover, part 1"]
+. Clicking the "Failover" option brings up a modal dialog that indicates where the failover will move the workload and when the last known good state of the workload was taken. Click the "Initiate" button to begin the failover:
++
+.ramendr-starter-kit-trigger-failover-part-2
+image::/images/ramendr-starter-kit/ramendr-starter-kit-trigger-failover-2.png[ramendr-starter-kit-trigger-failover-2,title="RamenDR Starter Kit Trigger Failover, part 2"]
+. While the failover is happening, you can watch its progress in the activity area. When it is done, it will say (for a discovered application) that it is necessary to clean up application resources to allow replication to start in the other direction. Notice that the primary cluster should have changed:
++
+.ramendr-starter-kit-trigger-failover-cleanup
+image::/images/ramendr-starter-kit/ramendr-starter-kit-failover-cleanup.png[ramendr-starter-kit-failover-cleanup,title="RamenDR Starter Kit Failover Cleanup"]
+. The pattern provides a script to do this cleanup. Invoke it with your hub cluster KUBECONFIG set, by running `./pattern.sh scripts/cleanup-gitops-vms-non-primary.sh`:
++
+.ramendr-starter-kit-failover-cleanup-script
+image::/images/ramendr-starter-kit/ramendr-starter-kit-failover-cleanup-script.png[ramendr-starter-kit-failover-cleanup-script,title="RamenDR Starter Kit Failover Cleanup"]
+. After a few minutes, the resources should show healthy and protected again (the PVCs take a few minutes to synchronize):
++
+.ramendr-starter-kit-failover-reprotected
+image::/images/ramendr-starter-kit/ramendr-starter-kit-reprotected.png[ramendr-starter-kit-reprotected,title="RamenDR Starter Kit Reprotected"] diff --git a/content/patterns/ramendr-starter-kit/installation-details.adoc b/content/patterns/ramendr-starter-kit/installation-details.adoc new file mode 100644 index 000000000..20328e8e2 --- /dev/null +++ b/content/patterns/ramendr-starter-kit/installation-details.adoc @@ -0,0 +1,88 @@
+---
+title: Installation Details
+weight: 20
+aliases: /ramendr-starter-kit/installation-details/
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+include::modules/comm-attributes.adoc[]
+
+== Installation Steps
+
+The pattern executes the following steps on the cluster:
+
+. Apply Subscriptions and Applications to the hub cluster
+.. This includes ACM, ODF, and the ODF MultiCluster Orchestrator on the hub cluster
+. Build the managed clusters (ocp-primary and ocp-secondary) with Hive
+.. The managed clusters have identical configuration regarding Subscriptions and Applications, so they are both in the resilient clusterGroup
+. The opp-policy app is responsible for copying CA certificates to the following places:
+.. Creating a configmap `cluster-proxy-ca-bundle` in the `openshift-config` namespace
+.. Assigning this configmap to the `proxy` cluster resource
+.. Adding the certificate material to the `ramen-dr-cluster-operator` config in `openshift-dr-system`
+. The regional-dr app is responsible for:
+.. Ensuring ODF is set up properly
+.. Installing the Submariner add-ons on the managed clusters
+.. Creating the DRPolicy, MirrorPeer, DRPC, and Placement objects for RamenDR
+.. Installing the VM workload on the primary cluster
+.. Disabling sync on the regional-dr app to prevent potential conflicts later (a sketch of this step follows the list)
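+
+The sync-disable step is essentially a patch against the Argo CD Application resource. The actual logic lives in
+`charts/hub/rdr/scripts/drpc-health-check-argocd-sync-disable.sh` and does more checking than this; the command
+below is only a minimal sketch of the idea, and the application name and namespace shown are assumptions to adjust
+for your deployment:
+
+[source,terminal]
+----
+# Remove the automated sync policy so Argo CD stops reconciling the app.
+$ oc -n openshift-gitops patch application.argoproj.io regional-dr \
+    --type merge -p '{"spec":{"syncPolicy":{"automated":null}}}'
+----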
+
+=== Various scripts included in the pattern and how to use them
+
+* scripts/cleanup-gitops-vms-non-primary.sh
+
+Designed to be run when you need to manually clean up resources from a "failed" cluster.
+Intended to be run with the kubeconfig from the hub cluster; it determines where to
+delete resources based on the current DRPC state.
+
+* scripts/download-kubeconfigs.sh
+
+Downloads and extracts the kubeconfigs for the managed clusters to the current directory.
+Useful when you need to check something or do something on one of the managed clusters.
+
+* charts/hub/opp/scripts/argocd-health-monitor.sh
+
+Ensures that ArgoCD is progressing properly in deploying resources. A workaround for an ArgoCD
+bug we ran into during development.
+
+* charts/hub/opp/scripts/odf-ssl-precheck.sh
+
+Ensures all the preconditions have been met for extracting certificates to distribute among the
+clusters.
+
+* charts/hub/opp/scripts/odf-ssl-certificate-extraction.sh
+
+This script does the actual work of extracting and distributing the CA material to the various
+places it needs to go. It will also restart the Velero (OADP) pods if needed.
+
+* charts/hub/rdr/scripts/odf-dr-prerequisites-check.sh
+
+Ensures that ODF is fully ready to be configured for Disaster Recovery. In particular, it waits for ODF
+to finish deployment and for the NooBaa/S3 service to be operational on all clusters.
+
+* charts/hub/rdr/scripts/submariner-prerequisites-check.sh
+
+Ensures that Submariner is running properly and operational on both clusters. This is required for
+ODF PVC replication to work.
+
+* charts/hub/rdr/scripts/edge-gitops-vms-deploy.sh
+
+This script deploys the VM workload to the primary cluster. It uses the Validated Patterns helm chart but
+is not an Argo application, to avoid starting up resources on clusters where we do not want them running.
+Thus it runs from the hub cluster.
+
+* charts/hub/rdr/scripts/drpc-health-check-argocd-sync-disable.sh
+
+This script disables sync on the rdr application to prevent ArgoCD from changing anything during a
+subsequent failover or relocate.
+
+* charts/hub/rdr/scripts/submariner-sg-tag.sh
+
+During development of the pattern we discovered a bug in Submariner that can prevent LoadBalancer services from
+being created correctly after Submariner is installed. This script is a workaround for that bug.
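+
+A typical troubleshooting flow combines the two user-facing scripts. A minimal sketch, assuming the hub
+kubeconfig is saved at `~/hub-kubeconfig` and that the download script writes per-cluster kubeconfig files
+to the current directory (the exact file names are an assumption; check the script's output):
+
+[source,terminal]
+----
+# Point at the hub cluster, then pull down the managed-cluster kubeconfigs.
+$ export KUBECONFIG=~/hub-kubeconfig
+$ ./pattern.sh scripts/download-kubeconfigs.sh
+
+# Inspect a managed cluster directly (file name is an assumption).
+$ oc --kubeconfig=./ocp-primary-kubeconfig get nodes
+
+# After a failover completes, clean up the failed cluster's resources.
+$ ./pattern.sh scripts/cleanup-gitops-vms-non-primary.sh
+----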
+ + + + diff --git a/static/images/ramendr-starter-kit/ramendr-architecture.drawio.png b/static/images/ramendr-starter-kit/ramendr-architecture.drawio.png new file mode 100644 index 000000000..3d2624956 Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-architecture.drawio.png differ diff --git a/static/images/ramendr-starter-kit/ramendr-clusters-built.png b/static/images/ramendr-starter-kit/ramendr-clusters-built.png new file mode 100644 index 000000000..6194b5327 Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-clusters-built.png differ diff --git a/static/images/ramendr-starter-kit/ramendr-hub-operators.png b/static/images/ramendr-starter-kit/ramendr-hub-operators.png new file mode 100644 index 000000000..56aa23b2e Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-hub-operators.png differ diff --git a/static/images/ramendr-starter-kit/ramendr-starter-kit-failover-cleanup-script.png b/static/images/ramendr-starter-kit/ramendr-starter-kit-failover-cleanup-script.png new file mode 100644 index 000000000..c2ea03584 Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-starter-kit-failover-cleanup-script.png differ diff --git a/static/images/ramendr-starter-kit/ramendr-starter-kit-failover-cleanup.png b/static/images/ramendr-starter-kit/ramendr-starter-kit-failover-cleanup.png new file mode 100644 index 000000000..a79d05517 Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-starter-kit-failover-cleanup.png differ diff --git a/static/images/ramendr-starter-kit/ramendr-starter-kit-hub-applications.png b/static/images/ramendr-starter-kit/ramendr-starter-kit-hub-applications.png new file mode 100644 index 000000000..2f8c0e66e Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-starter-kit-hub-applications.png differ diff --git a/static/images/ramendr-starter-kit/ramendr-starter-kit-hub-resources-protected.png b/static/images/ramendr-starter-kit/ramendr-starter-kit-hub-resources-protected.png new file mode 100644 index 000000000..d0da476b1 Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-starter-kit-hub-resources-protected.png differ diff --git a/static/images/ramendr-starter-kit/ramendr-starter-kit-reprotected.png b/static/images/ramendr-starter-kit/ramendr-starter-kit-reprotected.png new file mode 100644 index 000000000..2a85ae948 Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-starter-kit-reprotected.png differ diff --git a/static/images/ramendr-starter-kit/ramendr-starter-kit-running-vms.png b/static/images/ramendr-starter-kit/ramendr-starter-kit-running-vms.png new file mode 100644 index 000000000..02466d7f6 Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-starter-kit-running-vms.png differ diff --git a/static/images/ramendr-starter-kit/ramendr-starter-kit-trigger-failover-1.png b/static/images/ramendr-starter-kit/ramendr-starter-kit-trigger-failover-1.png new file mode 100644 index 000000000..f3c58a1ae Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-starter-kit-trigger-failover-1.png differ diff --git a/static/images/ramendr-starter-kit/ramendr-starter-kit-trigger-failover-2.png b/static/images/ramendr-starter-kit/ramendr-starter-kit-trigger-failover-2.png new file mode 100644 index 000000000..6295b58bc Binary files /dev/null and b/static/images/ramendr-starter-kit/ramendr-starter-kit-trigger-failover-2.png differ