From 9da4319fb06f986d499888dfc6db9b916326ee3b Mon Sep 17 00:00:00 2001 From: Avani Bhatt Date: Mon, 9 Oct 2023 19:31:46 +0100 Subject: [PATCH] Updating URLS and links post the migration to the new site and the new repo for Validated Patterns --- README.md | 2 +- content/blog/2021-12-31-medical-diagnosis.md | 2 +- content/blog/2022-03-30-multicloud-gitops.md | 4 +- content/blog/2022-09-02-route.md | 8 ++-- content/blog/2022-10-12-acm-provisioning.md | 4 +- content/contribute/contribute-to-docs.adoc | 2 +- content/contribute/extending-a-pattern.md | 24 +++++------ content/learn/faq.adoc | 14 +++---- content/learn/implementation.adoc | 4 +- content/learn/secrets.adoc | 2 +- content/learn/validated.adoc | 2 +- content/learn/workflow.adoc | 10 ++--- .../patterns/ansible-edge-gitops/_index.md | 2 +- .../ansible-automation-platform.md | 32 +++++++-------- .../ansible-edge-gitops/getting-started.md | 6 +-- .../ideas-for-customization.md | 14 +++---- .../installation-details.md | 40 +++++++++---------- .../openshift-virtualization.md | 14 +++---- .../ansible-edge-gitops/troubleshooting.md | 2 +- content/patterns/cockroachdb/_index.md | 2 +- .../connected-vehicle-architecture/_index.md | 2 +- content/patterns/devsecops/_index.md | 2 +- content/patterns/devsecops/getting-started.md | 10 ++--- .../devsecops/secure-supply-chain-demo.md | 4 +- content/patterns/industrial-edge/_index.md | 2 +- .../patterns/industrial-edge/application.md | 4 +- .../industrial-edge/getting-started.md | 12 +++--- .../ideas-for-customization.md | 4 +- .../industrial-edge/troubleshooting.md | 14 +++---- content/patterns/kong-gateway/_index.md | 2 +- .../patterns/medical-diagnosis/_index.adoc | 2 +- .../ideas-for-customization.adoc | 2 +- .../multicloud-gitops-Portworx/_index.md | 2 +- .../ideas-for-customization.md | 2 +- .../patterns/multicloud-gitops/_index.adoc | 2 +- content/patterns/retail/_index.md | 2 +- content/patterns/retail/components.md | 10 ++--- content/patterns/retail/getting-started.md | 4 +- content/patterns/retail/troubleshooting.md | 2 +- layouts/404.html | 4 +- modules/contributing.adoc | 12 +++--- modules/doc-guidelines.adoc | 4 +- modules/mcg-deploying-mcg-pattern.adoc | 4 +- modules/tools-and-setup.adoc | 16 ++++---- 44 files changed, 158 insertions(+), 156 deletions(-) diff --git a/README.md b/README.md index e8c3caa78..3edd468b0 100644 --- a/README.md +++ b/README.md @@ -1,4 +1,4 @@ -# Hybrid Cloud Patterns documentation site +# Validated Patterns documentation site This project contains the new proof-of-concept documentation site for validatedpatterns.io diff --git a/content/blog/2021-12-31-medical-diagnosis.md b/content/blog/2021-12-31-medical-diagnosis.md index c2278a228..035583283 100644 --- a/content/blog/2021-12-31-medical-diagnosis.md +++ b/content/blog/2021-12-31-medical-diagnosis.md @@ -14,7 +14,7 @@ aliases: /2021/12/31/medical-diagnosis/ Our team recently completed the development of a validated pattern that showcases the capabilities we have at our fingertips when we combine OpenShift and other cutting edge Red Hat technologies to deliver a solution. -We've taken an application defined imperatively in an Ansible playbook and converted it into GitOps style declarative kubernetes resources. Using the validated pattern framework we are able to deploy, manage and integrate with multiple cutting edge Red Hat technologies, and provide a capability that the initial deployment strategy didn't have available to it: a lifecycle. 
Everything you need to take this pattern for a spin is in [git](https://github.com/hybrid-cloud-patterns/medical-diagnosis). +We've taken an application defined imperatively in an Ansible playbook and converted it into GitOps style declarative kubernetes resources. Using the validated pattern framework we are able to deploy, manage and integrate with multiple cutting edge Red Hat technologies, and provide a capability that the initial deployment strategy didn't have available to it: a lifecycle. Everything you need to take this pattern for a spin is in [git](https://github.com/validatedpatterns/medical-diagnosis). ## Pattern Workflow diff --git a/content/blog/2022-03-30-multicloud-gitops.md b/content/blog/2022-03-30-multicloud-gitops.md index 0ea0ce0e4..bee0ce443 100644 --- a/content/blog/2022-03-30-multicloud-gitops.md +++ b/content/blog/2022-03-30-multicloud-gitops.md @@ -12,9 +12,9 @@ aliases: /2022/03/30/multicloud-gitops/ # Validated Pattern: Multi-Cloud GitOps -## Hybrid Cloud Patterns: The Story so far +## Validated Patterns: The Story so far -Our first foray into the realm of Hybrid Cloud Patterns was the adaptation of the MANUela application and its associated tooling to ArgoCD and Tekton, to demonstrate the deployment of a fairly involved IoT application designed to monitor industrial equipment and use AI/ML techniques to predict failure. This resulted in the Industrial Edge validated pattern, which you can see [here](https://github.com/hybrid-cloud-patterns/industrial-edge). +Our first foray into the realm of Validated Patterns was the adaptation of the MANUela application and its associated tooling to ArgoCD and Tekton, to demonstrate the deployment of a fairly involved IoT application designed to monitor industrial equipment and use AI/ML techniques to predict failure. This resulted in the Industrial Edge validated pattern, which you can see [here](https://github.com/validatedpatterns/industrial-edge). This was our first use of a framework to deploy a significant application, and we learned a lot by doing it. It was good to be faced with a number of problems in the “real world” before taking a look at what is really essential for the framework and why. diff --git a/content/blog/2022-09-02-route.md b/content/blog/2022-09-02-route.md index cbb714406..8fd7a49db 100644 --- a/content/blog/2022-09-02-route.md +++ b/content/blog/2022-09-02-route.md @@ -26,7 +26,7 @@ kind: Route metadata: name: hello-openshift spec: - host: hello-openshift-hello-openshift. + host: hello-openshift-hello-openshift. port: targetPort: 8080 to: @@ -75,7 +75,7 @@ metadata: name: hello-openshift namespace: hello-openshift spec: - subdomain: hello-openshift-hello-openshift + subdomain: hello-openshift-hello-openshift port: targetPort: 8080 to: @@ -101,7 +101,7 @@ Now using project "hello-openshift" on server "https://api.magic-mirror-2.bluepr Last but not least now let's apply that example route definition we just created. ```console -$ oc create -f /tmp/route-example.yaml +$ oc create -f /tmp/route-example.yaml route.route.openshift.io/hello-openshift created ``` @@ -148,4 +148,4 @@ As you can see the *subdomain* property was replaced with the *host* property bu Using the *subdomain* property when defining route is super useful if you are deploying your application to different clusters and it will allow you to not have to hard code the ingress domain for every cluster. 
-If you have any questions or want to see what we are working on please feel free to visit our [Hybrid Cloud Patterns](https://validatedpatterns.io/) site. If you are excited or intrigued by what you see here we’d love to hear your thoughts and ideas! Try the patterns contained in our [Hybrid Cloud Patterns Repo](https://github.com/hybrid-cloud-patterns). We will review your pull requests to our pattern repositories. +If you have any questions or want to see what we are working on please feel free to visit our [Validated Patterns](https://validatedpatterns.io/) site. If you are excited or intrigued by what you see here we’d love to hear your thoughts and ideas! Try the patterns contained in our [Validated Patterns Repo](https://github.com/validatedpatterns). We will review your pull requests to our pattern repositories. diff --git a/content/blog/2022-10-12-acm-provisioning.md b/content/blog/2022-10-12-acm-provisioning.md index 5eec18d36..f407e8a17 100644 --- a/content/blog/2022-10-12-acm-provisioning.md +++ b/content/blog/2022-10-12-acm-provisioning.md @@ -41,7 +41,7 @@ the pay-as-you-go OpenShift managed service. Start by [deploying](https://validatedpatterns.io/multicloud-gitops/getting-started/) the Multi-cloud GitOps pattern on AWS. -Next, you'll need to create a fork of the [multicloud-gitops](https://github.com/hybrid-cloud-patterns/multicloud-gitops/) +Next, you'll need to create a fork of the [multicloud-gitops](https://github.com/validatedpatterns/multicloud-gitops/) repo. Go there in a browser, make sure you’re logged in to GitHub, click the “Fork” button, and confirm the destination by clicking the big green "Create fork" button. @@ -56,7 +56,7 @@ And finally, click through to the installed operator, and select the `Create instance` button and fill out the Create a Pattern form. Most of the defaults are fine, but make sure you update the GitSpec URL to point to your fork of `multicloud-gitops`, rather than -`https://github.com/hybrid-cloud-patterns/multicloud-gitops`. +`https://github.com/validatedpatterns/multicloud-gitops`. ### Providing your Cloud Credentials diff --git a/content/contribute/contribute-to-docs.adoc b/content/contribute/contribute-to-docs.adoc index 67f078d1a..73fa5394a 100644 --- a/content/contribute/contribute-to-docs.adoc +++ b/content/contribute/contribute-to-docs.adoc @@ -7,7 +7,7 @@ weight: 10 :toc: -//Use the Contributor's guide to learn about ways to contribute to the Hybrid Cloud Patterns, to understand the prerequisites and toolchain required for contribution, and to follow some basic documentation style and structure guidelines. +//Use the Contributor's guide to learn about ways to contribute to the Validated Patterns, to understand the prerequisites and toolchain required for contribution, and to follow some basic documentation style and structure guidelines. include::modules/contributing.adoc[leveloffset=+1] include::modules/tools-and-setup.adoc[leveloffset=+1] diff --git a/content/contribute/extending-a-pattern.md b/content/contribute/extending-a-pattern.md index f69e23ac4..a80809d74 100644 --- a/content/contribute/extending-a-pattern.md +++ b/content/contribute/extending-a-pattern.md @@ -8,15 +8,15 @@ aliases: /extending-a-pattern/ # Extending an existing pattern ## Introduction to extending a pattern using a fork -Extending an existing pattern refers to adding a new product and/or configuration to an existing pattern. For example a pattern might be a great fit for a solution but requires the addition of an observability tool, e.g. 
Prometheus, Grafana, or Elastic. Extending an existing pattern is not very difficult. The advantage is that it automates the integration of this extra product into the pattern.
Red Hat Advanced Cluster Security expects to use the namespace `stackrox`. While you might try using a different namespace, you may encounter issues.
It is not always clear how these technologies and concepts play together - and Hybrid Cloud Patterns is our effort to show these technologies working together on non-trivial applications in ways that make sense for real customers and partners to use. +Many things have changed in the IT landscape in the last few years - containers and kubernetes have taken the industry by storm, but they introduce many technologies and concepts. It is not always clear how these technologies and concepts play together - and Validated Patterns is our effort to show these technologies working together on non-trivial applications in ways that make sense for real customers and partners to use. -The first Hybrid Cloud Pattern is based on https://github.com/sa-mw-dach/manuela[MANUela], an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, an s3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible for only gathering data from instrumented line devices and shares them via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis. +The first Validated Pattern is based on https://github.com/sa-mw-dach/manuela[MANUela], an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, an s3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible for only gathering data from instrumented line devices and shares them via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis. -We are actively developing new Hybrid Cloud Patterns. Watch this space for updates! +We are actively developing new Validated Patterns. Watch this space for updates! [id="how-are-they-different-from-xyz"] == How are they different from XYZ? -Many technology demos can be very minimal - such demos have an important place in the ecosystem to demonstrate the intent of an individual technology. Hybrid Cloud Patterns are meant to demonstrate groups of technologies working together in a cloud native way. And yet, we hope to make these patterns general enough to allow for swapping application components out -- for example, if you want to swap out ActiveMQ for RabbitMQ to support MQTT - or use a different messaging technology altogether, that should be possible. The other components will require reconfiguration. +Many technology demos can be very minimal - such demos have an important place in the ecosystem to demonstrate the intent of an individual technology. 
Validated Patterns are meant to demonstrate groups of technologies working together in a cloud-native way. At the same time, we hope to make these patterns general enough to allow for swapping application components out - for example, swapping ActiveMQ for RabbitMQ to support MQTT, or using a different messaging technology altogether, should be possible. The other components will then require reconfiguration.
Imperative elements *MUST* be implemented as idempotent code stored in Git.
[id="nominating-a-community-pattern-to-become-validated"] diff --git a/content/learn/workflow.adoc b/content/learn/workflow.adoc index 08169a3d8..fbcfd073a 100644 --- a/content/learn/workflow.adoc +++ b/content/learn/workflow.adoc @@ -10,7 +10,7 @@ aliases: /workflow/ = Workflow These patterns are designed to be composed of multiple components, and for those components to be used in gitops -workflows by consumers and contributors. To use the first pattern as an example, we maintain the link:/industrial-edge[Industrial Edge] pattern, which uses a https://github.com/hybrid-cloud-patterns/industrial-edge[repo] with pattern-specific logic and configuration as well as a https://github.com/hybrid-cloud-patterns/common[common repo] which has elements common to multiple patterns. The common repository is included in each pattern repository as a subtree. +workflows by consumers and contributors. To use the first pattern as an example, we maintain the link:/industrial-edge[Industrial Edge] pattern, which uses a https://github.com/validatedpatterns/industrial-edge[repo] with pattern-specific logic and configuration as well as a https://github.com/validatedpatterns/common[common repo] which has elements common to multiple patterns. The common repository is included in each pattern repository as a subtree. [id="consuming-a-pattern"] == Consuming a pattern @@ -50,12 +50,12 @@ workflows) and will be easier to make upstream, if you wish. Contributions from . Customizations to `values-global.yaml` and other files that are particular to your installation . Commits made by Tekton and other automated processes that will be particular to your installation -To isolate changes for upstreaming (`hcp` is "Hybrid Cloud Patterns", you can use a different remote and/or branch name +To isolate changes for upstreaming (`hcp` is "Validated Patterns", you can use a different remote and/or branch name if you want): [source,terminal] ---- -$ git remote add hcp https://github.com/hybrid-cloud-patterns/industrial-edge +$ git remote add hcp https://github.com/validatedpatterns/industrial-edge $ git fetch --all $ git branch -b hcp-main -t hcp/main @@ -147,7 +147,7 @@ When run without arguments, the script will run as if it had been given the foll [source,terminal] ---- -$ common/scripts/make_common_subtree.sh https://github.com/hybrid-cloud-patterns/common.git main common-subtree +$ common/scripts/make_common_subtree.sh https://github.com/validatedpatterns/common.git main common-subtree ---- Which are the defaults the repository is normally configured with. @@ -176,7 +176,7 @@ Subtrees have some pitfalls as well. In the subtree strategy, it is easier to di [id="contributing-to-patterns-using-common-subtrees"] == Contributing to Patterns using Common Subtrees -Once you have forked common and changed your subtree for testing, changes from your fork can then be proposed to [https://github.com/hybrid-cloud-patterns/common.git] and can then be integrated into other patterns. A change to upstream common for a particular upstream pattern would have to be done in two stages: +Once you have forked common and changed your subtree for testing, changes from your fork can then be proposed to [https://github.com/validatedpatterns/common.git] and can then be integrated into other patterns. A change to upstream common for a particular upstream pattern would have to be done in two stages: . PR the change into upstream's common . 
PR the updated common into the pattern repository diff --git a/content/patterns/ansible-edge-gitops/_index.md b/content/patterns/ansible-edge-gitops/_index.md index 05cc00b9e..40523176d 100644 --- a/content/patterns/ansible-edge-gitops/_index.md +++ b/content/patterns/ansible-edge-gitops/_index.md @@ -16,7 +16,7 @@ pattern_logo: ansible-edge.png links: install: getting-started help: https://groups.google.com/g/hybrid-cloud-patterns - bugs: https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/issues + bugs: https://github.com/validatedpatterns/ansible-edge-gitops/issues ci: aegitops --- diff --git a/content/patterns/ansible-edge-gitops/ansible-automation-platform.md b/content/patterns/ansible-edge-gitops/ansible-automation-platform.md index 49f00410f..d3e76dbd4 100644 --- a/content/patterns/ansible-edge-gitops/ansible-automation-platform.md +++ b/content/patterns/ansible-edge-gitops/ansible-automation-platform.md @@ -24,12 +24,12 @@ The Secret you are looking for is in the `ansible-automation-platform` project a [![secrets-detail](/images/ansible-edge-gitops/ocp-console-aap-admin-password-detail.png)](/images/ansible-edge-gitops/ocp-console-aap-admin-password-detail.png) -## Via [ansible_get_credentials.sh](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/scripts/ansible_get_credentials.sh) +## Via [ansible_get_credentials.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_get_credentials.sh) With your KUBECONFIG set, you can run `./scripts/ansible-get-credentials.sh` from your top-level pattern directory. This will use your OpenShift cluster admin credentials to retrieve the URL for your Ansible Automation Platform instance, as well as the password for its `admin` user, which is auto-generated by the AAP operator by default. The output of the command looks like this (your password will be different): ```text -./scripts/ansible_get_credentials.sh +./scripts/ansible_get_credentials.sh [WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all' @@ -68,7 +68,7 @@ localhost : ok=7 changed=0 unreachable=0 failed=0 s # Pattern AAP Configuration Details -In this section, we describe the details of the AAP configuration we apply as part of installing the pattern. All of the configuration discussed in this section is applied by the [ansible_load_controller.sh](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) script. +In this section, we describe the details of the AAP configuration we apply as part of installing the pattern. All of the configuration discussed in this section is applied by the [ansible_load_controller.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) script. ## Loading a Manifest @@ -106,13 +106,13 @@ This CredentialType is considered "secret" because it includes the admin login p The pattern installs an Inventory (HMI Demo), but no inventory sources. This is due to the way that OpenShift Virtualization provides access to virtual machines. The IP address associated with the SSH service that a given VM is running is associated with the Service object on the VM. This is not the way the Kubernetes inventory plugin expects to work. So to make inventory dynamic, we are instead using a play to discover VMs and add them to inventory "on the fly". 
What is unusual about DNS inside a Kubernetes cluster is that a Service outside the caller's namespace must be addressed by its cluster FQDN - which is `resource-name.resource-namespace.svc`.
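As a rough illustration of that FQDN form (all names here are hypothetical, not taken from the pattern's plays), a discovery task might register a VM's SSH Service under its cluster FQDN like this:

```yaml
# Hedged sketch with made-up names: a Service "kiosk-ssh" in namespace
# "edge-gitops-vms" is reachable from other namespaces only as
# <service-name>.<service-namespace>.svc.
- name: Add a discovered VM service to inventory (illustrative only)
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Register the VM by the cluster FQDN of its SSH Service
      ansible.builtin.add_host:
        name: kiosk-1
        groups: kiosks
        ansible_host: kiosk-ssh.edge-gitops-vms.svc
```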
+The pattern includes an execution environment definition that can be found [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/ansible/execution_environment). The execution environment includes some additional collections beyond what is provided in the Default execution environment, including: @@ -156,11 +156,11 @@ The execution environment includes some additional collections beyond what is pr * [containers.podman](https://galaxy.ansible.com/containers/podman) * [community.okd](https://docs.ansible.com/ansible/latest/collections/community/okd/index.html) -The execution environment definition is provided if you want to customize or change it; if so, you should also change the Execution Environment attributes of the Templates (in the [load script](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh), those attributes are set by the variables `aap_execution_environment` and `aap_execution_environment_image`). +The execution environment definition is provided if you want to customize or change it; if so, you should also change the Execution Environment attributes of the Templates (in the [load script](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh), those attributes are set by the variables `aap_execution_environment` and `aap_execution_environment_image`). ## Roles included in the pattern -### [kiosk_mode](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/tree/main/ansible/roles/kiosk_mode) +### [kiosk_mode](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/ansible/roles/kiosk_mode) This role is responsible does the following: @@ -169,7 +169,7 @@ This role is responsible does the following: * Installation of Firefox * Configuration of Firefox kiosk mode -### [container_lifecycle](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/tree/main/ansible/roles/container_lifecycle) +### [container_lifecycle](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/ansible/roles/container_lifecycle) This role is responsible for: @@ -179,15 +179,15 @@ This role is responsible for: ## Extra Playbooks in the Pattern -### [inventory_preplay.yml](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/ansible/inventory_preplay.yml) +### [inventory_preplay.yml](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/inventory_preplay.yml) This playbook is designed to be included in other plays; its purpose is to discover the desired inventory and add those hosts to inventory at runtime. It uses a kubernetes query via the cluster-admin kube config file. -### [Provision Kiosk Playbook](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/ansible/provision_kiosk.yml) +### [Provision Kiosk Playbook](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/provision_kiosk.yml) This does the work of provisioning the kiosk, which configures kiosk mode, and also installs Ignition and configures it to start at boot. It runs the [kiosk_mode](/patterns/ansible-edge-gitops/ansible-automation-platform/#roles-included-in-the-pattern) and [container_lifecycle](/patterns/ansible-edge-gitops/ansible-automation-platform/#roles-included-in-the-pattern) roles. 
# Next Steps ## [Help & Feedback](https://groups.google.com/g/hybrid-cloud-patterns) -## [Report Bugs](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/issues) +## [Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues) diff --git a/content/patterns/ansible-edge-gitops/getting-started.md b/content/patterns/ansible-edge-gitops/getting-started.md index 63d323b0d..6dd26b84a 100644 --- a/content/patterns/ansible-edge-gitops/getting-started.md +++ b/content/patterns/ansible-edge-gitops/getting-started.md @@ -25,7 +25,7 @@ service](https://console.redhat.com/openshift/create). # Credentials Required in Pattern In addition to the openshift cluster, you will need to prepare a number of secrets, or credentials, which will be used -in the pattern in various ways. To do this, copy the [values-secret.yaml template](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/values-secret.yaml.template) to your home directory as `values-secret.yaml` and replace the explanatory text as follows: +in the pattern in various ways. To do this, copy the [values-secret.yaml template](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/values-secret.yaml.template) to your home directory as `values-secret.yaml` and replace the explanatory text as follows: * A username and SSH Keypair (private key and public key). These will be used to provide access to the Kiosk VMs in the demo. @@ -124,7 +124,7 @@ To install a collection that is not currently installed: export KUBECONFIG=~/my-ocp-env/hub/auth/kubeconfig ``` -1. Fork the [ansible-edge-gitops](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops) repo on GitHub. It is necessary to fork to preserve customizations you make to the default configuration files. +1. Fork the [ansible-edge-gitops](https://github.com/validatedpatterns/ansible-edge-gitops) repo on GitHub. It is necessary to fork to preserve customizations you make to the default configuration files. 1. Clone the forked copy of this repository. @@ -233,4 +233,4 @@ As part of this pattern HashiCorp Vault has been installed. Refer to the section # Next Steps ## [Help & Feedback](https://groups.google.com/g/hybrid-cloud-patterns) -## [Report Bugs](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/issues) +## [Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues) diff --git a/content/patterns/ansible-edge-gitops/ideas-for-customization.md b/content/patterns/ansible-edge-gitops/ideas-for-customization.md index 69f784b5b..86c400275 100644 --- a/content/patterns/ansible-edge-gitops/ideas-for-customization.md +++ b/content/patterns/ansible-edge-gitops/ideas-for-customization.md @@ -16,7 +16,7 @@ This demo in particular can be customized in a number of ways that might be very 1. Either fork the repo or copy the edge-gitops-vms chart out of it. -1. Customize the [values.yaml](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/values.yaml) file +1. Customize the [values.yaml](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/values.yaml) file The `vms` data structure is designed to support multiple groups and types of VMs. The `kiosk` example defines all of the variables currently supported by the chart, including references to the Vault instance and port definitions. 
If, for example, you wanted to replace kiosk with new iotsensor and iotgateway types, the whole file might look like this: @@ -79,7 +79,7 @@ vms: targetPort: 1883 ``` -This would create 1 iotgateway VM and 4 iotsensor VMs. Adjustments would also need to be made in [values-secret](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/values-secret.yaml.template) and [ansible-load-controller](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) to add the iotgateway-ssh and iotsensor-ssh data structures. +This would create 1 iotgateway VM and 4 iotsensor VMs. Adjustments would also need to be made in [values-secret](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/values-secret.yaml.template) and [ansible-load-controller](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) to add the iotgateway-ssh and iotsensor-ssh data structures. # HOWTO define your own VM sets "from scratch" @@ -246,11 +246,11 @@ In just a few minutes, you will have a blank rhel8 VM running, which you can the oc get template -n openshift rhel8-desktop-medium -o yaml > my-template.yaml ``` -Once you have this local template, you can view the elements you want to customize, possibly using [this](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml) as an example. +Once you have this local template, you can view the elements you want to customize, possibly using [this](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml) as an example. # HOWTO Define your own Ansible Controller Configuration -The [ansible_load_controller.sh](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) is designed to be relatively easy to customize with a new controller configuration. Structurally, it is principally based on [configure_controller.yml](https://github.com/redhat-cop/controller_configuration/blob/devel/playbooks/configure_controller.yml) from the Red Hat Community of Practice [controller_configuration](https://github.com/redhat-cop/controller_configuration) collection. The order and specific list of roles invoked is taken from there. +The [ansible_load_controller.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) is designed to be relatively easy to customize with a new controller configuration. Structurally, it is principally based on [configure_controller.yml](https://github.com/redhat-cop/controller_configuration/blob/devel/playbooks/configure_controller.yml) from the Red Hat Community of Practice [controller_configuration](https://github.com/redhat-cop/controller_configuration) collection. The order and specific list of roles invoked is taken from there. To customize it, the main thing would be to replace the different variables in the role tasks with the your own. The script includes the roles for variable types that this pattern does not manage in order to make that part straightforward. Feel free to add your own roles and playbooks (and add them to the controller configuration script). @@ -258,11 +258,11 @@ The reason this pattern ships with a script as it does instead of invoking the r # HOWTO substitute your own container application (instead of ignition) -1. 
Adjust the query in the [inventory_preplay.yml](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/inventory_preplay.yml) either by overriding the vars for the play, or by forking the repo and replacing the vars with your own query terms. (That is, use your own label(s) and namespace to discover the services you want to connect to.)
Instead of installing the operator, it installs a helm chart that does the same thing - it creates a subscription for OpenShift GitOps and deploys both the cluster-wide and hub instances of that operator. It then proceeds with installing the clustergroup application.
Specific processes that are called by post-install include: -### [vault-init](https://github.com/hybrid-cloud-patterns/common/blob/main/scripts/vault-utils.sh) +### [vault-init](https://github.com/validatedpatterns/common/blob/main/scripts/vault-utils.sh) Vault requires extra setup in the form of unseal keys and configuration of secrets. The vault-init task does this. Note that it is safe to run vault-init as it will exit successfully if it can connect to a cluster with a running, unsealed vault. -### [load-secrets](https://github.com/hybrid-cloud-patterns/common/blob/main/scripts/vault-utils.sh) +### [load-secrets](https://github.com/validatedpatterns/common/blob/main/scripts/vault-utils.sh) This process (which calls push_secrets) calls an Ansible playbook that reads the values-secret.yaml file and stores the data it finds there in vault as keypairs. These values are then usable in the kubernetes cluster. This pattern uses the ssh pubkey for the kiosk VMs via the external secrets operator. This script will update secrets in vault if re-run; it is safe to re-run if the secret values have not changed as well. -### [configure-controller](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) +### [configure-controller](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) -There are two parts to this script - the first part, with the code [here](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/ansible/ansible_get_credentials.yml), retrieves the admin credentials from OpenShift to enable login to the AAP Controller. +There are two parts to this script - the first part, with the code [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/ansible_get_credentials.yml), retrieves the admin credentials from OpenShift to enable login to the AAP Controller. -The second part, which is the bulk of the ansible-load-controller process is [here](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/ansible/ansible_configure_controller.yml) and uses the [controller configuration](https://github.com/redhat-cop/controller_configuration) framework to configure the Ansible Automation Platform instance that is installed by the helm chart. +The second part, which is the bulk of the ansible-load-controller process is [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/ansible/ansible_configure_controller.yml) and uses the [controller configuration](https://github.com/redhat-cop/controller_configuration) framework to configure the Ansible Automation Platform instance that is installed by the helm chart. This division is so that users can adapt this pattern more easily if they're running AAP, but not on OpenShift. @@ -89,25 +89,25 @@ The script waits until AAP is ready, and then proceeds to: # OpenShift GitOps (ArgoCD) -OpenShift GitOps is central to this pattern as it is responsible for installing all of the other components. The installation process is driven through the installation of the [clustergroup](https://github.com/hybrid-cloud-patterns/common/tree/main/clustergroup) chart. This in turn reads the repo's [global values file](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/values-global.yaml), which instructs it to read the [hub values file](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/values-hub.yaml). 
This is how the pattern knows to apply the Subscriptions and Applications listed further in the pattern. +OpenShift GitOps is central to this pattern as it is responsible for installing all of the other components. The installation process is driven through the installation of the [clustergroup](https://github.com/validatedpatterns/common/tree/main/clustergroup) chart. This in turn reads the repo's [global values file](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/values-global.yaml), which instructs it to read the [hub values file](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/values-hub.yaml). This is how the pattern knows to apply the Subscriptions and Applications listed further in the pattern. # ODF (OpenShift Data Foundations) -ODF is the storage framework that is needed to provide resilient storage for OpenShift Virtualization. It is managed via the helm chart [here](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/tree/main/charts/hub/openshift-data-foundations). This is basically the same chart that our Medical Diagnosis pattern uses (see [here](/patterns/medical-diagnosis/getting-started/) for details on the Medical Edge pattern's use of storage). +ODF is the storage framework that is needed to provide resilient storage for OpenShift Virtualization. It is managed via the helm chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/openshift-data-foundations). This is basically the same chart that our Medical Diagnosis pattern uses (see [here](/patterns/medical-diagnosis/getting-started/) for details on the Medical Edge pattern's use of storage). Please note that this chart will create a Noobaa S3 bucket named nb.epoch_timestamp.cluster-domain which will not be destroyed when the cluster is destroyed. # OpenShift Virtualization (KubeVirt) -OpenShift Virtualization is a framework for running virtual machines as native Kubernetes resources. While it can run without hardware acceleration, the performance of virtual machines will suffer terribly; some testing on a similar workload indicated a 4-6x delay running without hardware acceleration, so at present this pattern requires hardware acceleration. The pattern provides a script [deploy-kubevirt-worker.sh](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/scripts/deploy_kubevirt_worker.sh) which will provision a metal worker to run virtual machines for the pattern. +OpenShift Virtualization is a framework for running virtual machines as native Kubernetes resources. While it can run without hardware acceleration, the performance of virtual machines will suffer terribly; some testing on a similar workload indicated a 4-6x delay running without hardware acceleration, so at present this pattern requires hardware acceleration. The pattern provides a script [deploy-kubevirt-worker.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/deploy_kubevirt_worker.sh) which will provision a metal worker to run virtual machines for the pattern. OpenShift Virtualization currently supports only AWS and on-prem clusters; this is because of the way that baremetal resources are provisioned in GCP and Azure. We hope that OpenShift Virtualization can support GCP and Azure soon. -The installation of the OpenShift Virtualization HyperConverged deployment is controlled by the chart [here](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/tree/main/charts/hub/cnv). 
+The installation of the OpenShift Virtualization HyperConverged deployment is controlled by the chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/cnv). OpenShift Virtualization was chosen in this pattern to avoid dealing with the differences in galleries and templates of images between the different public cloud providers. The important thing from this pattern's standpoint is the availability of machine instances to manage (since we are simulating an Edge deployment scenario, which could either be bare metal instances or virtual machines); OpenShift Virtualization was the easiest and most portable way to spin up machine instances. It also provides mechanisms for defining the desired machine set declaratively. -The creation of virtual machines is controlled by the chart [here](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/tree/main/charts/hub/edge-gitops-vms). +The creation of virtual machines is controlled by the chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/edge-gitops-vms). More details about the way we use OpenShift Virtualization are available [here](/ansible-edge-gitops/openshift-virtualization). @@ -119,15 +119,15 @@ gives us a way to do that. All of the Ansible interactions are defined in a Git Repository; the Ansible jobs that configure the VMs are designed to be idempotent (and are scheduled to run every 10 minutes on those VMs). -The installation of AAP itself is governed by the chart [here](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/tree/main/charts/hub/ansible-automation-platform). The post-installation configuration of AAP is done via the [ansible-load-controller.sh](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) script. +The installation of AAP itself is governed by the chart [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/ansible-automation-platform). The post-installation configuration of AAP is done via the [ansible-load-controller.sh](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/scripts/ansible_load_controller.sh) script. It is very much the intention of this pattern to make it easy to replace the specific Edge management use case with another one. Some ideas on how to do that can be found [here](/ansible-edge-gitops/ideas-for-customization/). -Specifics of the Ansible content for this pattern can be seen [here](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/tree/main/ansible). +Specifics of the Ansible content for this pattern can be seen [here](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/ansible). More details of the specifics of how AAP is configured are available [here](/ansible-edge-gitops/ansible-automation-platform/). 
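As a rough sketch of what drives the controller configuration framework mentioned above, the playbook feeds it variables along these lines; the project, inventory, template, and schedule names here are invented for illustration, while the 10-minute cadence matches the jobs described above:

```yaml
controller_projects:
  - name: AEG Demo Project            # illustrative name
    organization: Default
    scm_type: git
    scm_url: https://github.com/YOUR-FORK/ansible-edge-gitops.git  # assumed to point at your fork

controller_templates:
  - name: Kiosk Configuration         # illustrative name
    organization: Default
    project: AEG Demo Project
    inventory: HMI Demo               # illustrative inventory
    playbook: ansible/kiosk_playbook.yml   # assumed path within the repository

controller_schedules:
  - name: Kiosk Every 10 Minutes
    unified_job_template: Kiosk Configuration
    rrule: "DTSTART:20230101T000000Z RRULE:FREQ=MINUTELY;INTERVAL=10"
```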
# Next Steps ## [Help & Feedback](https://groups.google.com/g/hybrid-cloud-patterns) -## [Report Bugs](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/issues) +## [Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues) diff --git a/content/patterns/ansible-edge-gitops/openshift-virtualization.md b/content/patterns/ansible-edge-gitops/openshift-virtualization.md index d73643232..68fa9b505 100644 --- a/content/patterns/ansible-edge-gitops/openshift-virtualization.md +++ b/content/patterns/ansible-edge-gitops/openshift-virtualization.md @@ -6,9 +6,9 @@ aliases: /ansible-edge-gitops/openshift-virtualization/ # OpenShift Virtualization -# Understanding the Edge GitOps VMs [Helm Chart](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/tree/main/charts/hub/edge-gitops-vms) +# Understanding the Edge GitOps VMs [Helm Chart](https://github.com/validatedpatterns/ansible-edge-gitops/tree/main/charts/hub/edge-gitops-vms) -The heart of the Edge GitOps VMs helm chart is a [template file](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml) that was designed with a fair amount of flexibility in mind. Specifically, it allows you to specify: +The heart of the Edge GitOps VMs helm chart is a [template file](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml) that was designed with a fair amount of flexibility in mind. Specifically, it allows you to specify: 1. One or more "groups" of VMs (such as "kiosk" in our example) with an arbitrary number of instances per group 1. Different sizing parameters (cores, threads, memory, disk size) for each group @@ -230,7 +230,7 @@ windows2k19-server-large Template for Microsoft Windows S windows2k19-server-medium Template for Microsoft Windows Server 2019 VM. A PVC with the Windows disk im... 3 (1 generated) 1 ``` -Additionally, you may copy and customize these templates if you wish. The [template file](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/rhel8-kiosk-with-svc.yaml) is an example of a customized template that was used to help develop this pattern. +Additionally, you may copy and customize these templates if you wish. The [template file](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/rhel8-kiosk-with-svc.yaml) is an example of a customized template that was used to help develop this pattern. ### Creating a VM from the Console via Template @@ -267,7 +267,7 @@ You could also use the "Create VM Wizard" in the OpenShift console. See details [here](/patterns/ansible-edge-gitops/ideas-for-customization/#howto-define-your-own-vm-sets-from-scratch). 
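To illustrate the per-group flexibility described above, a values entry consumed by the template might declare a VM group like this sketch; the key names are an approximation of the chart's values layout, not a verbatim copy:

```yaml
vms:
  kiosk:
    count: 4            # number of VM instances in this group
    cores: 1
    sockets: 1
    threads: 1
    memory: 4Gi
    storage: 30Gi
    ports:
      - name: ssh       # exposed per-VM as a Service
        port: 22
        protocol: TCP
```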
-## Components of the [virtual-machines](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml) template
+## Components of the [virtual-machines](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml) template
 
 ### Setup - the mechanism for creating identifiers declaratively
 
@@ -339,7 +339,7 @@ Click on the "three dots" menu on the right, which will open a dialog like the f
 
 [![kubevirt411-vm-open-console](/images/ansible-edge-gitops/aeg-kubevirt411-con-ignition.png)](/images/ansible-edge-gitops/aeg-kubevirt411-con-ignition.png)
 
-The virtual machine console view will either show a standard RHEL console login screen, or if the demo is working as designed, it will show the Ignition application running in kiosk mode. If the console shows a standard RHEL login, it can be accessed using the the initial user name (`cloud-user` by default) and password (which is what is specified in the Helm chart Values as either the password specific to that machine group, the default cloudInit, or a hardcoded default which can be seen in the template [here](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml). On a VM created through the wizard or via `oc process` from a template, the password will be set on the VirtualMachine object in the `volumes` section.
+The virtual machine console view will either show a standard RHEL console login screen, or if the demo is working as designed, it will show the Ignition application running in kiosk mode. If the console shows a standard RHEL login, it can be accessed using the initial user name (`cloud-user` by default) and password (which is specified in the Helm chart values as either the password specific to that machine group, the default cloudInit password, or a hardcoded default that can be seen in the template [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml)). On a VM created through the wizard or via `oc process` from a template, the password will be set on the VirtualMachine object in the `volumes` section.
 
 ### Initial User login (cloud-user)
 
@@ -349,9 +349,9 @@ In general, and before the VMs have been configured by the Ansible Jobs, you can
 
 Also included in the edge-gitops-vms chart is a separate template that will allow the creation of VMs with similar (though not identical characteristics) to the ones defined in the chart.
 
-The [rhel8-kiosk-with-svc](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/rhel8-kiosk-with-svc.yaml) template is preserved as an intermediate step to creating your own VM types, to see how the pipeline from default VM template -> customized template -> Helm-variable chart can work.
+The [rhel8-kiosk-with-svc](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/rhel8-kiosk-with-svc.yaml) template is preserved as an intermediate step to creating your own VM types, to see how the pipeline from default VM template -> customized template -> Helm-variable chart can work.
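For reference, the `volumes` section mentioned above carries the cloud-init data that seeds the initial user. On a rendered VirtualMachine it looks roughly like this sketch; the disk names and password are placeholders:

```yaml
volumes:
  - name: rootdisk
    dataVolume:
      name: kiosk-1-rootdisk             # placeholder disk name
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |-
        #cloud-config
        user: cloud-user                 # the default initial user
        password: 'changeme-placeholder' # set per group, via the chart default, or hardcoded in the template
        chpasswd:
          expire: false
```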
# Next Steps ## [Help & Feedback](https://groups.google.com/g/hybrid-cloud-patterns) -## [Report Bugs](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/issues) +## [Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues) diff --git a/content/patterns/ansible-edge-gitops/troubleshooting.md b/content/patterns/ansible-edge-gitops/troubleshooting.md index 1dbb184d8..1cc54f822 100644 --- a/content/patterns/ansible-edge-gitops/troubleshooting.md +++ b/content/patterns/ansible-edge-gitops/troubleshooting.md @@ -6,6 +6,6 @@ aliases: /ansible-edge-gitops/troubleshooting/ # Troubleshooting -## Our [Issue Tracker](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/issues) +## Our [Issue Tracker](https://github.com/validatedpatterns/ansible-edge-gitops/issues) Please file an issue if you see a problem! diff --git a/content/patterns/cockroachdb/_index.md b/content/patterns/cockroachdb/_index.md index b1d8b0ea2..4c8a613b9 100644 --- a/content/patterns/cockroachdb/_index.md +++ b/content/patterns/cockroachdb/_index.md @@ -10,4 +10,4 @@ products: A multicloud pattern using cockroachdb and submariner, deployed via RHACM. -[Repo](https://github.com/hybrid-cloud-patterns/cockroachdb-pattern) \ No newline at end of file +[Repo](https://github.com/validatedpatterns/cockroachdb-pattern) \ No newline at end of file diff --git a/content/patterns/connected-vehicle-architecture/_index.md b/content/patterns/connected-vehicle-architecture/_index.md index ea10e3133..476e61e62 100644 --- a/content/patterns/connected-vehicle-architecture/_index.md +++ b/content/patterns/connected-vehicle-architecture/_index.md @@ -10,4 +10,4 @@ products: A distributed cloud-native application that implements key aspects of a modern IoT architecture. -[Repo](https://github.com/hybrid-cloud-patterns/connected-vehicle-architecture) +[Repo](https://github.com/validatedpatterns/connected-vehicle-architecture) diff --git a/content/patterns/devsecops/_index.md b/content/patterns/devsecops/_index.md index 7f7baa0c5..7a53e9b57 100644 --- a/content/patterns/devsecops/_index.md +++ b/content/patterns/devsecops/_index.md @@ -16,7 +16,7 @@ aliases: /devsecops/ links: install: getting-started help: https://groups.google.com/g/hybrid-cloud-patterns - bugs: https://github.com/hybrid-cloud-patterns/multicluster-devsecops/issues + bugs: https://github.com/validatedpatterns/multicluster-devsecops/issues ci: devsecops --- diff --git a/content/patterns/devsecops/getting-started.md b/content/patterns/devsecops/getting-started.md index 81a2f130f..efae64726 100644 --- a/content/patterns/devsecops/getting-started.md +++ b/content/patterns/devsecops/getting-started.md @@ -9,7 +9,7 @@ aliases: /devsecops/getting-started/ # Prerequisites 1. An OpenShift cluster (Go to [the OpenShift console](https://console.redhat.com/openshift/create)). Cluster must have a dynamic StorageClass to provision PersistentVolumes. See also [sizing your cluster](../../devsecops/cluster-sizing). -1. A second OpenShift cluster for development using secure CI pipelines. +1. A second OpenShift cluster for development using secure CI pipelines. 1. A third OpenShift cluster for production. (optional but desirable) 1. A GitHub account (and a token for it with repositories permissions, to read from and write to your forks) 1. Tools Podman and Git. (see below) @@ -21,7 +21,7 @@ service](https://console.redhat.com/openshift/create). 
# Credentials Required in Pattern
 
 In addition to the openshift cluster, you will need to prepare a number of secrets, or credentials, which will be used
-in the pattern in various ways. To do this, copy the [values-secret.yaml template](https://github.com/hybrid-cloud-patterns/multicluster-devsecops/blob/main/values-secret.yaml.template) to your home directory as `values-secret.yaml` and replace the explanatory text as follows:
+in the pattern in various ways. To do this, copy the [values-secret.yaml template](https://github.com/validatedpatterns/multicluster-devsecops/blob/main/values-secret.yaml.template) to your home directory as `values-secret.yaml` and replace the explanatory text as follows:
 
 * Your git repository username and password. The password must be base64 encoded.
 
@@ -54,7 +54,7 @@ secrets:
 
 * Git command line tool ([git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git))
 * Podman command line tool ([podman](https://podman.io/getting-started/installation))
 
-1. Fork the [Multicluster DevSecOps](https://github.com/hybrid-cloud-patterns/multicluster-devsecops) repository on GitHub. It is necessary to fork because your fork will be updated as part of the GitOps and DevSecOps processes. The **Fork** information and pull down menu can be found on the top right of the GitHub page for a pattern. Select the pull down an select **Create a new fork**.
+1. Fork the [Multicluster DevSecOps](https://github.com/validatedpatterns/multicluster-devsecops) repository on GitHub. It is necessary to fork because your fork will be updated as part of the GitOps and DevSecOps processes. The **Fork** information and pull-down menu can be found on the top right of the GitHub page for a pattern. Select the pull-down and select **Create a new fork**.
 
 1. Clone the forked copy of the `multicluster-devsecops` repository. Use branch `v1.0`. (Clone in an appropriate sub-dir)
 
@@ -191,7 +191,7 @@ Click on the "Refresh web console" link.
 
 1. Return to the ACS Central tab and paste the password into the password field. Make sure that the Username is `admin`.
 
-1. This will bring you to the ACS Central dashboard page. At first it may not show any clusters showing but as the ACS secured deployment on the hub syncs with ACS central on the hub then information will start to show. 
+1. This will bring you to the ACS Central dashboard page. At first it may not show any clusters, but as the ACS secured deployment on the hub syncs with ACS Central, information will start to show.
 
 [![ACS Central dashboard](/images/devsecops/acs-dashboard.png)](/images/devsecops/acs-dashboard.png)
 
@@ -264,7 +264,7 @@ Advanced Cluster Security needs to be integrated with Quay Enterprise registry.
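For orientation, a filled-in git entry in `values-secret.yaml` (described at the start of this section) might look like the following sketch; treat the template file in the repository as authoritative for the exact layout, and note the base64 encoding of the password:

```yaml
secrets:
  github:
    username: your-github-username
    # base64-encode the token first, for example with:  echo -n '<token>' | base64
    password: eW91ci10b2tlbi1oZXJl   # placeholder base64 value ("your-token-here")
```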
# Next Steps [Help & Feedback](https://groups.google.com/g/hybrid-cloud-patterns){: .btn .fs-5 .mb-4 .mb-md-0 .mr-2 } -[Report Bugs](https://github.com/hybrid-cloud-patterns/multicluster-devsecops/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 } +[Report Bugs](https://github.com/validatedpatterns/multicluster-devsecops/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 } Once the hub has been setup correctly and confirmed to be working, you can: diff --git a/content/patterns/devsecops/secure-supply-chain-demo.md b/content/patterns/devsecops/secure-supply-chain-demo.md index b299b481b..32227877f 100644 --- a/content/patterns/devsecops/secure-supply-chain-demo.md +++ b/content/patterns/devsecops/secure-supply-chain-demo.md @@ -26,7 +26,9 @@ Make sure you have the `kubeadmin` administrator login for the data center clust You will need to login into GitHub and be able to fork two repositories. -* hybrid-cloud-patterns/multicluster-devsecops +* validatedpatterns/multicluster-devsecops + + * hybrid-cloud-patterns/chat-client ## Pipeline Demos diff --git a/content/patterns/industrial-edge/_index.md b/content/patterns/industrial-edge/_index.md index f67578294..611927414 100644 --- a/content/patterns/industrial-edge/_index.md +++ b/content/patterns/industrial-edge/_index.md @@ -17,7 +17,7 @@ links: install: getting-started arch: https://www.redhat.com/architect/portfolio/architecturedetail?ppid=26 help: https://groups.google.com/g/hybrid-cloud-patterns - bugs: https://github.com/hybrid-cloud-patterns/industrial-edge/issues + bugs: https://github.com/validatedpatterns/industrial-edge/issues ci: manuela --- diff --git a/content/patterns/industrial-edge/application.md b/content/patterns/industrial-edge/application.md index 4db9cdc4a..3959bf799 100644 --- a/content/patterns/industrial-edge/application.md +++ b/content/patterns/industrial-edge/application.md @@ -38,8 +38,8 @@ Make sure you have the `kubeadmin` administrator login for the data center clust You will need to login into GitHub and be able to fork two repositories. -* hybrid-cloud-patterns/industrial-edge -* hybrid-cloud-patterns/manuela-dev +* validatedpatterns/industrial-edge +* validatedpatterns-demos/manuela-dev ## Configuration changes with GitOps diff --git a/content/patterns/industrial-edge/getting-started.md b/content/patterns/industrial-edge/getting-started.md index 021e0b5a6..86972f412 100644 --- a/content/patterns/industrial-edge/getting-started.md +++ b/content/patterns/industrial-edge/getting-started.md @@ -35,9 +35,9 @@ For installation tooling dependencies, see [Patterns quick start]({{< ref "/cont # How to deploy -1. Fork the [industrial-edge](https://github.com/hybrid-cloud-patterns/industrial-edge) repository on GitHub. It is necessary to fork because your fork will be updated as part of the GitOps and DevOps processes. +1. Fork the [industrial-edge](https://github.com/validatedpatterns/industrial-edge) repository on GitHub. It is necessary to fork because your fork will be updated as part of the GitOps and DevOps processes. -1. Fork the [manuela-dev](https://github.com/hybrid-cloud-patterns/manuela-dev) repository on GitHub. It is necessary to fork this repository because the GitOps framework will push tags to this repository that match the versions of software that it will deploy. +1. Fork the [manuela-dev](https://github.com/validatedpatterns-demos/manuela-dev) repository on GitHub. 
It is necessary to fork this repository because the GitOps framework will push tags to this repository that match the versions of software that it will deploy.
 
 1. Clone the forked copy of the `industrial-edge` repository. Create a deployment branch using the branch `v2.3`.
 
@@ -61,7 +61,7 @@ For installation tooling dependencies, see [Patterns quick start]({{< ref "/cont
     vi ~/values-secret-industrial-edge.yaml
     ```
 
-1. Customize the following secret values. 
+1. Customize the following secret values.
 
     ```yaml
     version: "2.0"
 
@@ -137,7 +137,7 @@ For installation tooling dependencies, see [Patterns quick start]({{< ref "/cont
     git push origin deploy-v2.3
     ```
 
-1. You can deploy the pattern using the [Validated Patterns Operator](/infrastructure/using-validated-pattern-operator/) directly. If you deploy the pattern using the Validated Patterns Operator, installed through `Operator Hub`, you will need to run `make load-secrets` through a terminal session on your laptop or bastion host. 
+1. You can deploy the pattern using the [Validated Patterns Operator](/infrastructure/using-validated-pattern-operator/) directly. If you deploy the pattern using the Validated Patterns Operator, installed through `Operator Hub`, you will need to run `make load-secrets` through a terminal session on your laptop or bastion host.
 
 1. If you deploy the pattern through a terminal session on your laptop or bastion host login to your cluster by using the`oc login` command or by exporting the `KUBECONFIG` file.
 
@@ -160,7 +160,7 @@ For installation tooling dependencies, see [Patterns quick start]({{< ref "/cont
 
 # Validating the Environment
 
-1. In the OpenShift Container Platform web console, navigate to the **Operators → OperatorHub** page. 
+1. In the OpenShift Container Platform web console, navigate to the **Operators → OperatorHub** page.
 2. Verify that the following Operators are installed on the HUB cluster:
 
    ```text
 
@@ -228,7 +228,7 @@ For installation tooling dependencies, see [Patterns quick start]({{< ref "/cont
 
 ## Next Steps
 
 [Help & Feedback](https://groups.google.com/g/hybrid-cloud-patterns){: .btn .fs-5 .mb-4 .mb-md-0 .mr-2 }
-[Report Bugs](https://github.com/hybrid-cloud-patterns/industrial-edge/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 }
+[Report Bugs](https://github.com/validatedpatterns/industrial-edge/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 }
 
 Once the data center has been setup correctly and confirmed to be working, you can:
 
diff --git a/content/patterns/industrial-edge/ideas-for-customization.md b/content/patterns/industrial-edge/ideas-for-customization.md
index e8a064b23..1e97b6a94 100644
--- a/content/patterns/industrial-edge/ideas-for-customization.md
+++ b/content/patterns/industrial-edge/ideas-for-customization.md
@@ -14,7 +14,7 @@ This demo in particular can be customized in a number of ways that might be very
 
 # HOWTO Forking the Industrial Edge repository to your github account
 
-Hopefully we are all familiar with GitHub. If you are not GitHub is a code hosting platform for version control and collaboration. It lets you and others work together on projects from anywhere. Our Industrial Edge GitOps repository is available in our [Hybrid Cloud Patterns GitHub](https://github.com/hybrid-cloud-patterns "Hybrid Cloud Patterns Homepage") organization.
+Hopefully we are all familiar with GitHub. If you are not, GitHub is a code hosting platform for version control and collaboration. It lets you and others work together on projects from anywhere.
Our Industrial Edge GitOps repository is available in our [Validated Patterns GitHub](https://github.com/validatedpatterns "Validated Patterns Homepage") organization. To fork this repository, and deploy the Industrial Edge pattern, follow the steps found in our [Getting Started](https://validatedpatterns.io/industrial-edge/getting-started "Industrial Edge Getting Started Guide") section. This will allow you to follow the next few HOWTO guides in this section. @@ -62,4 +62,4 @@ The idea is that this pattern can be used for other use cases keeping the main c What ideas for customization do you have? Can you use this pattern for other use cases? Let us know through our feedback link below. [Help & Feedback](https://groups.google.com/g/hybrid-cloud-patterns){: .btn .fs-5 .mb-4 .mb-md-0 .mr-2 } -[Report Bugs](https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 } +[Report Bugs](https://github.com/validatedpatterns/ansible-edge-gitops/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 } diff --git a/content/patterns/industrial-edge/troubleshooting.md b/content/patterns/industrial-edge/troubleshooting.md index 0e99dd2fe..98bc71109 100644 --- a/content/patterns/industrial-edge/troubleshooting.md +++ b/content/patterns/industrial-edge/troubleshooting.md @@ -6,7 +6,7 @@ aliases: /industrial-edge/troubleshooting/ # Troubleshooting -## Our [Issue Tracker](https://github.com/hybrid-cloud-patterns/industrial-edge/issues) +## Our [Issue Tracker](https://github.com/validatedpatterns/industrial-edge/issues) ## Installation-phase Failures @@ -47,7 +47,7 @@ The industrial edge pattern runs two post-install operations after creating the **Extracting the secret from the datacenter ArgoCD instance for use in the Pipelines** -This depends on the installation of both the cluster-wide GitOps operator, and the installation of an instance in the datacenter namespace. The logic is controlled [here](https://github.com/hybrid-cloud-patterns/industrial-edge/blob/main/Makefile) (where the parameters are set) and [here](https://github.com/hybrid-cloud-patterns/common/blob/main/Makefile), which does the interactions with the cluster (to extract the secret and create a resource in manuela-ci). +This depends on the installation of both the cluster-wide GitOps operator, and the installation of an instance in the datacenter namespace. The logic is controlled [here](https://github.com/validatedpatterns/industrial-edge/blob/main/Makefile) (where the parameters are set) and [here](https://github.com/validatedpatterns/common/blob/main/Makefile), which does the interactions with the cluster (to extract the secret and create a resource in manuela-ci). This task runs first, and if it does not complete, the seed pipeline will not start either. Things to check: @@ -85,9 +85,9 @@ In general, use the project-supplied `global.options.UseCSV` setting of `False`. #### Symptom: "User not found" error in first stage of pipeline run -**Cause:** Despite the message, the error is most likely that you don't have a fork of [manuela-dev](https://github.com/hybrid-cloud-patterns/manuela-dev). +**Cause:** Despite the message, the error is most likely that you don't have a fork of [manuela-dev](https://github.com/validatedpatterns-demos/manuela-dev). -**Resolution:** Fork [manuela-dev](https://github.com/hybrid-cloud-patterns/manuela-dev) into your namespace in GitHub and run `make seed`. 
+**Resolution:** Fork [manuela-dev](https://github.com/validatedpatterns-demos/manuela-dev) into your namespace in GitHub and run `make seed`.
 
 #### Symptom: Intermittent failures in Pipeline stages
 
@@ -120,7 +120,7 @@ desirable to do so, since multiple pipelines attempting to change the repository
 
 **Resolution:** Run `make seed` in the root of the repository OR re-run the failed pipeline segment (e.g. seed-iot-frontend or seed-iot-consumer).
 
-We're looking into better long-term fixes for a number of the situations that can cause these situations as [#40](https://github.com/hybrid-cloud-patterns/industrial-edge/issues/40).
+We're looking into better long-term fixes for a number of the situations that can cause these failures; we are tracking this as [#40](https://github.com/validatedpatterns/industrial-edge/issues/40).
 
 #### Symptom: Error in "push-*" pipeline tasks
 
@@ -161,10 +161,10 @@ rpc error: code = Unknown desc = Manifest generation error (cached): `/bin/bash
 
 **Cause:**
 
-This is a byproduct of the way the pattern installs applications at the moment. We are tracking this as [#39](https://github.com/hybrid-cloud-patterns/industrial-edge/issues/39).
+This is a byproduct of the way the pattern installs applications at the moment. We are tracking this as [#39](https://github.com/validatedpatterns/industrial-edge/issues/39).
 
 #### Symptom: Applications show "not in sync" status in ArgoCD
 
 **Cause:** There is a discrepancy between what the git repository says the application should have, and how that state is realized in ArgoCD.
 
-The installation mechanism currently installs operators as parts of multiple applications when running on the same cluster, so it is a race condition in ArgoCD to see which one "wins." This is a problem with the way we are installing the patterns. We are tracking this as [#38](https://github.com/hybrid-cloud-patterns/industrial-edge/issues/38).
+The installation mechanism currently installs operators as parts of multiple applications when running on the same cluster, so it is a race condition in ArgoCD to see which one "wins." This is a problem with the way we are installing the patterns. We are tracking this as [#38](https://github.com/validatedpatterns/industrial-edge/issues/38).
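To picture the race described above: when two Applications render the same operator `Subscription`, whichever syncs first owns the object and the other reports drift. The contested resource is an ordinary Subscription, sketched here with illustrative names and channel:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator      # rendered identically by more than one Application
  namespace: openshift-operators
spec:
  channel: latest                         # illustrative channel
  name: openshift-pipelines-operator-rh
  source: redhat-operators
  sourceNamespace: openshift-marketplace
```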
diff --git a/content/patterns/kong-gateway/_index.md b/content/patterns/kong-gateway/_index.md index ef47fee68..63133357e 100644 --- a/content/patterns/kong-gateway/_index.md +++ b/content/patterns/kong-gateway/_index.md @@ -9,4 +9,4 @@ products: A pattern for Kong Gateway Control Plane and Data Plane demo -[Repo](https://github.com/hybrid-cloud-patterns/kong-gateway) \ No newline at end of file +[Repo](https://github.com/validatedpatterns/kong-gateway) \ No newline at end of file diff --git a/content/patterns/medical-diagnosis/_index.adoc b/content/patterns/medical-diagnosis/_index.adoc index fa6cf967d..f8c705c89 100644 --- a/content/patterns/medical-diagnosis/_index.adoc +++ b/content/patterns/medical-diagnosis/_index.adoc @@ -15,7 +15,7 @@ links: install: getting-started arch: https://www.redhat.com/architect/portfolio/architecturedetail?ppid=6 help: https://groups.google.com/g/hybrid-cloud-patterns - bugs: https://github.com/hybrid-cloud-patterns/medical-diagnosis/issues + bugs: https://github.com/validatedpatterns/medical-diagnosis/issues ci: medicaldiag --- diff --git a/content/patterns/medical-diagnosis/ideas-for-customization.adoc b/content/patterns/medical-diagnosis/ideas-for-customization.adoc index 5dea450e1..fba7350e2 100644 --- a/content/patterns/medical-diagnosis/ideas-for-customization.adoc +++ b/content/patterns/medical-diagnosis/ideas-for-customization.adoc @@ -29,4 +29,4 @@ The {med-pattern} can answer the call to either of these requirements by using These are just a few ideas to help you understand how you could use the {med-pattern} as a framework for your application. //We have relevant links on the patterns page -//AI: Why does this point to AEG though? https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/issues[Report Bugs] +//AI: Why does this point to AEG though? https://github.com/validatedpatterns/ansible-edge-gitops/issues[Report Bugs] diff --git a/content/patterns/multicloud-gitops-Portworx/_index.md b/content/patterns/multicloud-gitops-Portworx/_index.md index 49892a4a7..a96de25d8 100644 --- a/content/patterns/multicloud-gitops-Portworx/_index.md +++ b/content/patterns/multicloud-gitops-Portworx/_index.md @@ -14,7 +14,7 @@ pattern_logo: multicloud-gitops-Portworx.png links: install: getting-started help: https://groups.google.com/g/hybrid-cloud-patterns - bugs: https://github.com/hybrid-cloud-patterns/medical-diagnosis/issues + bugs: https://github.com/validatedpatterns/medical-diagnosis/issues # ci: mcgitopspxe --- diff --git a/content/patterns/multicloud-gitops-Portworx/ideas-for-customization.md b/content/patterns/multicloud-gitops-Portworx/ideas-for-customization.md index f04542778..19ed35e06 100644 --- a/content/patterns/multicloud-gitops-Portworx/ideas-for-customization.md +++ b/content/patterns/multicloud-gitops-Portworx/ideas-for-customization.md @@ -26,4 +26,4 @@ In the end the possibilities to tweak this pattern are endless. 
Do let us know i >Contribute to this pattern: [Help & Feedback](https://groups.google.com/g/hybrid-cloud-patterns){: .btn .fs-5 .mb-4 .mb-md-0 .mr-2 } -[Report Bugs](https://github.com/hybrid-cloud-patterns/multicloud-gitops/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 } +[Report Bugs](https://github.com/validatedpatterns/multicloud-gitops/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 } diff --git a/content/patterns/multicloud-gitops/_index.adoc b/content/patterns/multicloud-gitops/_index.adoc index b42359d72..40658d293 100644 --- a/content/patterns/multicloud-gitops/_index.adoc +++ b/content/patterns/multicloud-gitops/_index.adoc @@ -14,7 +14,7 @@ links: install: mcg-getting-started arch: https://www.redhat.com/architect/portfolio/detail/8-hybrid-multicloud-management-with-gitops help: https://groups.google.com/g/hybrid-cloud-patterns - bugs: https://github.com/hybrid-cloud-patterns/multicloud-gitops/issues + bugs: https://github.com/validatedpatterns/multicloud-gitops/issues ci: mcgitops --- :toc: diff --git a/content/patterns/retail/_index.md b/content/patterns/retail/_index.md index 64e34fef5..933ef84a6 100644 --- a/content/patterns/retail/_index.md +++ b/content/patterns/retail/_index.md @@ -15,7 +15,7 @@ aliases: /retail/ links: install: getting-started help: https://groups.google.com/g/hybrid-cloud-patterns - bugs: https://github.com/hybrid-cloud-patterns/retail/issues + bugs: https://github.com/validatedpatterns/retail/issues # uncomment once this exists # ci: retail --- diff --git a/content/patterns/retail/components.md b/content/patterns/retail/components.md index f446bb730..a3448224f 100644 --- a/content/patterns/retail/components.md +++ b/content/patterns/retail/components.md @@ -6,7 +6,7 @@ aliases: /retail/components/ # Component Details -## The Quarkus Coffeeshop Store [Chart](https://github.com/hybrid-cloud-patterns/retail/tree/main/charts/store/quarkuscoffeeshop-charts) +## The Quarkus Coffeeshop Store [Chart](https://github.com/validatedpatterns/retail/tree/main/charts/store/quarkuscoffeeshop-charts) This chart is responsible for deploying the applications, services and routes for the Quarkus Coffeeshop demo. It models a set of microservices that would make sense for a coffeeshop retail operation. The detail of what the microservices do is [here](https://quarkuscoffeeshop.github.io/coffeeshop/). @@ -48,7 +48,7 @@ All the components look like this in ArgoCD when deployed: The chart is designed such that the same chart can be deployed in the hub cluster as the "production" store, the "demo" or TEST store, and on a remote cluster. -## The Quarkus Coffeeshop Database [Chart](https://github.com/hybrid-cloud-patterns/retail/tree/main/charts/all/crunchy-pgcluster) +## The Quarkus Coffeeshop Database [Chart](https://github.com/validatedpatterns/retail/tree/main/charts/all/crunchy-pgcluster) This installs a database instance suitable for use in the Retail pattern. It uses the Crunchy PostgreSQL [Operator](https://github.com/CrunchyData/postgres-operator) to provide PostgreSQL services, which includes high availability and backup services by default, and other features available. 
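As a sketch of what the crunchy-pgcluster chart hands to the operator, a minimal `PostgresCluster` resource looks something like this; the name, version, and sizes are illustrative rather than the chart's actual defaults:

```yaml
apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: coffeeshopdb              # illustrative name
spec:
  postgresVersion: 14             # illustrative version
  instances:
    - name: instance1
      replicas: 2                 # high availability by default
      dataVolumeClaimSpec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:                   # built-in backup support
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes: ["ReadWriteOnce"]
              resources:
                requests:
                  storage: 1Gi
```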
@@ -58,11 +58,11 @@ In ArgoCD, it looks like this: [![retail-v1-argo-coffeeshopdb](/images/retail/retail-v1-argo-coffeeshopdb.png)](/images/retail/retail-v1-argo-coffeeshopdb.png) -## The Quarkus Coffeeshop Kafka [Chart](https://github.com/hybrid-cloud-patterns/retail/tree/main/charts/all/quarkuscoffeeshop-kafka) +## The Quarkus Coffeeshop Kafka [Chart](https://github.com/validatedpatterns/retail/tree/main/charts/all/quarkuscoffeeshop-kafka) This chart installs Kafka for use in the Retail pattern. It uses the Red Hat AMQ Streams [operator](https://access.redhat.com/documentation/en-us/red_hat_amq/7.2/html/using_amq_streams_on_openshift_container_platform/index). -## The Quarkus Coffeeshop Pipelines [Chart](https://github.com/hybrid-cloud-patterns/retail/tree/main/charts/hub/quarkuscoffeeshop-pipelines) +## The Quarkus Coffeeshop Pipelines [Chart](https://github.com/validatedpatterns/retail/tree/main/charts/hub/quarkuscoffeeshop-pipelines) The pipelines chart defines build pipelines using the Red Hat OpenShift Pipelines [Operator](https://catalog.redhat.com/software/operators/detail/5ec54a4628834587a6b85ca5) (tektoncd). Pipelines are provided for all of the application images that ship with the pattern; the pipelines all build the app from source, deploy them to the "demo" namespace, and push them to the configured image registry. @@ -70,7 +70,7 @@ Like the store and database charts, the kafka chart supports all three modes of [![retail-v1-argo-pipelines](/images/retail/retail-v1-argo-pipelines.png)](/images/retail/retail-v1-argo-pipelines.png) -## The Quarkus Coffeeshop Landing Page [Chart](https://github.com/hybrid-cloud-patterns/retail/tree/main/charts/all/landing-page) +## The Quarkus Coffeeshop Landing Page [Chart](https://github.com/validatedpatterns/retail/tree/main/charts/all/landing-page) The Landing Page chart builds the page that presents the links for the demos in the pattern. diff --git a/content/patterns/retail/getting-started.md b/content/patterns/retail/getting-started.md index 845fb7811..405d9212f 100644 --- a/content/patterns/retail/getting-started.md +++ b/content/patterns/retail/getting-started.md @@ -40,7 +40,7 @@ Install the installation tooling dependencies. You will need: ## How to deploy -1. Fork the [retail](https://github.com/hybrid-cloud-patterns/retail) repository on GitHub. +1. Fork the [retail](https://github.com/validatedpatterns/retail) repository on GitHub. 1. Clone the forked copy of the `retail` repo. Use branch `v1.0'. 
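For context on the Kafka chart described above in the component details, the AMQ Streams operator consumes a `Kafka` resource along these lines; the name, replica counts, and listener details are illustrative:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: coffeeshop-kafka          # illustrative name
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral             # persistent-claim storage would be used for real deployments
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```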
@@ -165,4 +165,4 @@ Clicking on the respective Kafdrop links will go to a Kafdrop instance that allo ## Next Steps [Help & Feedback](https://groups.google.com/g/hybrid-cloud-patterns){: .btn .fs-5 .mb-4 .mb-md-0 .mr-2 } -[Report Bugs](https://github.com/hybrid-cloud-patterns/retail/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 } +[Report Bugs](https://github.com/validatedpatterns/retail/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 } diff --git a/content/patterns/retail/troubleshooting.md b/content/patterns/retail/troubleshooting.md index 74013c887..afc1800fa 100644 --- a/content/patterns/retail/troubleshooting.md +++ b/content/patterns/retail/troubleshooting.md @@ -6,4 +6,4 @@ aliases: /retail/troubleshooting/ # Troubleshooting -## Our [Issue Tracker](https://github.com/hybrid-cloud-patterns/industrial-edge/issues) +## Our [Issue Tracker](https://github.com/validatedpatterns/industrial-edge/issues) diff --git a/layouts/404.html b/layouts/404.html index fd9b4528c..6a252f8f5 100644 --- a/layouts/404.html +++ b/layouts/404.html @@ -30,7 +30,7 @@

Suggested content

- Get answers to commonly asked questions about hybrid cloud patterns.
+ Get answers to commonly asked questions about Validated Patterns.
@@ -42,7 +42,7 @@

Suggested content

- Read the latest blog posts from the Hybrid Cloud Patterns team.
+ Read the latest blog posts from the Validated Patterns team.
diff --git a/modules/contributing.adoc b/modules/contributing.adoc index 21ced4f3a..5d87893cf 100644 --- a/modules/contributing.adoc +++ b/modules/contributing.adoc @@ -2,21 +2,21 @@ :imagesdir: ../../images [id="contributing-to-docs-contributing"] -= Contribute to Hybrid Cloud Patterns documentation += Contribute to Validated Patterns documentation == Different ways to contribute -There are a few different ways you can contribute to Hybrid Cloud Patterns documentation: +There are a few different ways you can contribute to Validated Patterns documentation: -* Email the Hybrid Cloud Patterns team at mailto:hybrid-cloud-patterns@googlegroups.com[hybrid-cloud-patterns@googlegroups.com]. -* Create a link:https://github.com/hybrid-cloud-patterns/docs/issues[GitHub] or link:https://issues.redhat.com/projects/MBP/issues[Jira issue] . +* Email the Validated Patterns team at mailto:hybrid-cloud-patterns@googlegroups.com[hybrid-cloud-patterns@googlegroups.com]. +* Create a link:https://github.com/validatedpatterns/docs/issues[GitHub] or link:https://issues.redhat.com/projects/MBP/issues[Jira issue] . //to-do: Add link to the contribution workflow when we have a proper one. You might need to create a new file -* Submit a pull request (PR). To create a PR, create a local clone of your own fork of the link:https://github.com/hybrid-cloud-patterns/docs[Hybrid Cloud Patterns docs repository], make your changes, and submit a PR. This option is best if you have substantial changes. +* Submit a pull request (PR). To create a PR, create a local clone of your own fork of the link:https://github.com/validatedpatterns/docs[Validated Patterns docs repository], make your changes, and submit a PR. This option is best if you have substantial changes. //to-do:For more details on creating a PR see . == Contribution workflow -When you submit a PR, the https://github.com/orgs/hybrid-cloud-patterns/teams/docs[Hybrid Cloud Patterns Docs team] reviews the PR and arranges further reviews by Quality Engineering (QE), subject matter experts (SMEs), and others, as required. If the PR requires changes, updates, or corrections, the reviewers add comments in the PR. The documentation team merges the PR after you have implemented all feedback, and you have squashed all commits. +When you submit a PR, the https://github.com/orgs/hybrid-cloud-patterns/teams/docs[Validated Patterns Docs team] reviews the PR and arranges further reviews by Quality Engineering (QE), subject matter experts (SMEs), and others, as required. If the PR requires changes, updates, or corrections, the reviewers add comments in the PR. The documentation team merges the PR after you have implemented all feedback, and you have squashed all commits. == Repository organization diff --git a/modules/doc-guidelines.adoc b/modules/doc-guidelines.adoc index 3919d7a04..74d0c9f8c 100644 --- a/modules/doc-guidelines.adoc +++ b/modules/doc-guidelines.adoc @@ -3,7 +3,7 @@ [id="contributing-to-docs-doc-guidelines"] = Documentation guidelines -Documentation guidelines for contributing to the Hybrid Cloud Patterns Docs +Documentation guidelines for contributing to the Validated Patterns Docs == General guidelines @@ -124,7 +124,7 @@ include::_common-docs/common-attributes.adoc[] <4> <1> The content type for the file. For assemblies, always use `:_content-type: ASSEMBLY`. Place this attribute before the anchor ID or, if present, the conditional that contains the anchor ID. <2> A unique anchor ID for this assembly. Use lowercase. 
Example: cli-developer-commands <3> Human readable title (notice the '=' top-level header) -<4> Includes attributes common to Hybrid Cloud Patterns docs. +<4> Includes attributes common to Validated Patterns docs. <5> Context used for identifying headers in modules that is the same as the anchor ID. Example: cli-developer-commands. <6> A blank line. You *must* have a blank line here before the toc. <7> The table of contents for the current assembly. diff --git a/modules/mcg-deploying-mcg-pattern.adoc b/modules/mcg-deploying-mcg-pattern.adoc index 844733400..fc5697057 100644 --- a/modules/mcg-deploying-mcg-pattern.adoc +++ b/modules/mcg-deploying-mcg-pattern.adoc @@ -12,7 +12,7 @@ ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. See link:../../multicloud-gitops/mcg-cluster-sizing[sizing your cluster]. * Optional: A second OpenShift cluster for multicloud demonstration. //Replaced git and podman prereqs with the tooling dependencies page -* https://hybrid-cloud-patterns.io/learn/quickstart/[Install the tooling dependencies]. +* https://validatedpatterns.io/learn/quickstart/[Install the tooling dependencies]. The use of this pattern depends on having at least one running Red Hat OpenShift cluster. However, consider creating a cluster for deploying the GitOps management hub assets and a separate cluster for the managed cluster. @@ -23,7 +23,7 @@ public or private cloud by using https://console.redhat.com/openshift/create[Red . https://validatedpatterns.io/learn/quickstart/[Install the tooling dependencies]. + -. Fork the https://github.com/hybrid-cloud-patterns/multicloud-gitops[multicloud-gitops] repository on GitHub. +. Fork the https://github.com/validatedpatterns/multicloud-gitops[multicloud-gitops] repository on GitHub. . Clone the forked copy of this repository. + [source,terminal] diff --git a/modules/tools-and-setup.adoc b/modules/tools-and-setup.adoc index 953739b0f..56b2847d2 100644 --- a/modules/tools-and-setup.adoc +++ b/modules/tools-and-setup.adoc @@ -5,7 +5,7 @@ = Install and set up the tools and software == Create a GitHub account -Before you can contribute to Hybrid Cloud Patterns documentation, you must +Before you can contribute to Validated Patterns documentation, you must https://www.github.com/join[sign up for a GitHub account]. == Set up authentication @@ -20,11 +20,11 @@ Confirm authentication is working correctly with the following command: $ ssh -T git@github.com ---- -== Fork and clone the Hybrid Cloud Patterns documentation repository +== Fork and clone the Validated Patterns documentation repository -You must fork and set up the Hybrid Cloud Patterns documentation repository on your workstation so that you can create PRs and contribute. These steps must only be performed during initial setup. +You must fork and set up the Validated Patterns documentation repository on your workstation so that you can create PRs and contribute. These steps must only be performed during initial setup. -. Fork the https://github.com/hybrid-cloud-patterns/docs repository into your +. Fork the https://github.com/validatedpatterns/docs repository into your GitHub account from the GitHub UI. Click *Fork* in the upper right-hand corner. . In the terminal on your workstation, change into the directory where you want @@ -45,7 +45,7 @@ $ git clone git@github.com:/docs.git $ cd docs ---- -. Add an upstream pointer back to the Hybrid Cloud Patterns's remote repository, in this +. 
Add an upstream pointer back to the Validated Patterns remote repository, in this
case _docs_.
+
[source,terminal]
@@ -58,7 +58,7 @@
repository in sync with it.
 
 == Install Asciidoctor
 
-The Hybrid Cloud Patterns documentation is created in AsciiDoc language, and is processed with http://asciidoctor.org/[AsciiDoctor], which is an AsciiDoc language processor.
+The Validated Patterns documentation is created in AsciiDoc language, and is processed with http://asciidoctor.org/[AsciiDoctor], which is an AsciiDoc language processor.
 
 === Prerequisites
 
@@ -72,9 +72,9 @@ The following are minimum requirements:
 
 === Preview the documentation using a container image
 
-You can use the container image to build the Hybrid Cloud Patterns documentation, locally. To do so, ensure that you have installed the `make` and `podman` tools.
+You can use the container image to build the Validated Patterns documentation locally. To do so, ensure that you have installed the `make` and `podman` tools.
 
- * In the terminal window, navigate to the local instance of the `hybrid-cloud-patterns/docs` repository and run the following command:
+ * In the terminal window, navigate to the local instance of the `validatedpatterns/docs` repository and run the following command:
 
[source,terminal]
----