From 5fc149d4afff9c4c9079dfc1538b098b5aa7920e Mon Sep 17 00:00:00 2001 From: Avani Bhatt Date: Wed, 6 Sep 2023 19:14:32 +0100 Subject: [PATCH] Convert .md file to .adoc file under the Learn content type --- content/learn/{_index.md => _index.adoc} | 0 content/learn/{about.md => about.adoc} | 32 ++-- content/learn/community.adoc | 43 +++++ content/learn/community.md | 38 ---- content/learn/{faq.md => faq.adoc} | 46 +++-- content/learn/implementation.adoc | 74 ++++++++ content/learn/implementation.md | 68 ------- content/learn/importing-a-cluster.adoc | 11 +- ...{infrastructure.md => infrastructure.adoc} | 12 +- ...ing.md => ocp-cluster-general-sizing.adoc} | 90 ++++++---- ...g.md => rhel-for-edge-general-sizing.adoc} | 7 +- content/learn/{secrets.md => secrets.adoc} | 27 +-- content/learn/validated.adoc | 123 +++++++++++++ content/learn/validated.md | 114 ------------ content/learn/{vault.md => vault.adoc} | 34 ++-- content/learn/{workflow.md => workflow.adoc} | 168 ++++++++++-------- 16 files changed, 486 insertions(+), 401 deletions(-) rename content/learn/{_index.md => _index.adoc} (100%) rename content/learn/{about.md => about.adoc} (68%) create mode 100644 content/learn/community.adoc delete mode 100644 content/learn/community.md rename content/learn/{faq.md => faq.adoc} (56%) create mode 100644 content/learn/implementation.adoc delete mode 100644 content/learn/implementation.md rename content/learn/{infrastructure.md => infrastructure.adoc} (59%) rename content/learn/{ocp-cluster-general-sizing.md => ocp-cluster-general-sizing.adoc} (68%) rename content/learn/{rhel-for-edge-general-sizing.md => rhel-for-edge-general-sizing.adoc} (58%) rename content/learn/{secrets.md => secrets.adoc} (67%) create mode 100644 content/learn/validated.adoc delete mode 100644 content/learn/validated.md rename content/learn/{vault.md => vault.adoc} (66%) rename content/learn/{workflow.md => workflow.adoc} (60%) diff --git a/content/learn/_index.md b/content/learn/_index.adoc similarity index 100% rename from content/learn/_index.md rename to content/learn/_index.adoc diff --git a/content/learn/about.md b/content/learn/about.adoc similarity index 68% rename from content/learn/about.md rename to content/learn/about.adoc index fbd242c51..9ee62d037 100644 --- a/content/learn/about.md +++ b/content/learn/about.adoc @@ -3,12 +3,11 @@ menu: learn title: About Validated Patterns weight: 10 --- - -# About Validated Patterns +:toc: Validated Patterns and upstream Community Patterns are a natural progression from reference architectures with additional value. Here is a brief video to explain what patterns are all about: -[![patterns-intro-video](https://img.youtube.com/vi/lI8TurakeG4/0.jpg)](https://www.youtube.com/watch?v=lI8TurakeG4) +image::https://img.youtube.com/vi/lI8TurakeG4/0.jpg[patterns-intro-video,link=https://www.youtube.com/watch?v=lI8TurakeG4] This effort is focused on customer solutions that involve multiple Red Hat products. The patterns include one or more applications that are based on successfully deployed customer examples. Example application code is provided as a demonstration, along with the various open source projects and Red Hat products required to for the deployment to work. Users can then modify the pattern for their own specific application. @@ -17,13 +16,15 @@ How do we select and produce a pattern? 
We look for novel customer use cases, ob The automation also enables the solution to be added to Continuous Integration (CI), with triggers for new product versions (including betas), so that we can proactively find and fix breakage and avoid bit-rot. -## Who should use these patterns? +[id="who-should-use-these-patterns"] +== Who should use these patterns? -It is recommended that architects or advanced developers with knowledge of Kubernetes and Red Hat OpenShift Container Platform use these patterns. There are advanced [Cloud Native](https://www.cncf.io/projects/) concepts and projects deployed as part of the pattern framework. These include, but are not limited to, OpenShift Gitops ([ArgoCD](https://argoproj.github.io/argo-cd/)), Advanced Cluster Management ([Open Cluster Management](https://open-cluster-management.io/)), and OpenShift Pipelines ([Tekton](https://tekton.dev/)) +It is recommended that architects or advanced developers with knowledge of Kubernetes and Red Hat OpenShift Container Platform use these patterns. There are advanced https://www.cncf.io/projects/[Cloud Native] concepts and projects deployed as part of the pattern framework. These include, but are not limited to, OpenShift Gitops (https://argoproj.github.io/argo-cd/[ArgoCD]), Advanced Cluster Management (https://open-cluster-management.io/[Open Cluster Management]), and OpenShift Pipelines (https://tekton.dev/[Tekton]) -## General Structure +[id="general-structure"] +== General Structure -All patterns assume an OpenShift cluster is available to deploy the application(s) that are part of the pattern. If you do not have an OpenShift cluster, you can use [cloud.redhat.com](https://console.redhat.com/openshift). +All patterns assume an OpenShift cluster is available to deploy the application(s) that are part of the pattern. If you do not have an OpenShift cluster, you can use https://console.redhat.com/openshift[cloud.redhat.com]. The documentation will use the `oc` command syntax but `kubectl` can be used interchangeably. For each deployment it is assumed that the user is logged into a cluster using the `oc login` command or by exporting the `KUBECONFIG` path. @@ -31,26 +32,27 @@ The diagram below outlines the general deployment flow of a datacenter applicati But first the user must create a fork of the pattern repository. This allows changes to be made to operational elements (configurations etc.) and to application code that can then be successfully made to the forked repository for DevOps continuous integration (CI). Clone the directory to your laptop/desktop. Future changes can be pushed to your fork. -![GitOps for Datacenter](/images/gitops-datacenter.png) - - 1. Make a copy of the values file. There may be one or more values files. E.g. `values-global.yaml` and/or `values-datacenter.yaml`. While most of these values allow you to specify subscriptions, operators, applications and other application specifics, there are also *secrets* which may include encrypted keys or user IDs and passwords. It is important that you make a copy and **do not push your personal values file to a repository accessible to others!** +image::/images/gitops-datacenter.png[GitOps for Datacenter] - 2. Deploy the application as specified by the pattern. This may include a Helm command (`helm install`) or a make command (`make deploy`). +. Make a copy of the values file. There may be one or more values files. E.g. `values-global.yaml` and/or `values-datacenter.yaml`. 
While most of these values allow you to specify subscriptions, operators, applications and other application specifics, there are also _secrets_ which may include encrypted keys or user IDs and passwords. It is important that you make a copy and *do not push your personal values file to a repository accessible to others!*
+. Deploy the application as specified by the pattern. This may include a Helm command (`helm install`) or a make command (`make deploy`).
 
 When the workload is deployed the pattern first deploys OpenShift GitOps. OpenShift GitOps will then take over and make sure that all application and the components of the pattern are deployed. This includes required operators and application code. Most patterns will have an Advanced Cluster Management operator deployed so that multi-cluster deployments can be managed.
 
-## Edge Patterns
+[id="edge-patterns"]
+== Edge Patterns
 
 Some patterns include both a data center and one or more edge clusters. The diagram below outlines the general deployment flow of applications on an edge application. The edge OpenShift cluster is often deployed on a smaller cluster than the datacenter. Sometimes this might be a three node cluster that allows workloads to be deployed on the master nodes. The edge cluster might be a single node cluster (SN0). It might be deployed on bare metal, on local virtual machines or in a public/private cloud. Provision the cluster (see above)
 
-![GitOps for Edge](/images/gitops-edge.png)
+image::/images/gitops-edge.png[GitOps for Edge]
 
- 3. Import/join the cluster to the hub/data center. Instructions for importing the cluster can be found [here]. You're done.
+. Import/join the cluster to the hub/data center. Instructions for importing the cluster can be found in link:/learn/importing-a-cluster/[Importing a managed cluster]. You're done.
 
 When the cluster is imported, ACM on the datacenter will deploy an ACM agent and agent-addon pod into the edge cluster. Once installed and running ACM will then deploy OpenShift GitOps onto the cluster. Then OpenShift GitOps will deploy whatever applications are required for that cluster based on a label.
 
-## OpenShift GitOps (a.k.a ArgoCD)
+[id="openshift-gitops-argocd"]
+== OpenShift GitOps (a.k.a ArgoCD)
 
 When OpenShift GitOps is deployed and running in a cluster (datacenter or edge) you can launch its console by choosing ArgoCD in the upper left part of the OpenShift Console (TO-DO whenry to add an image and clearer instructions here)
 
diff --git a/content/learn/community.adoc b/content/learn/community.adoc
new file mode 100644
index 000000000..3f40e0f9f
--- /dev/null
+++ b/content/learn/community.adoc
@@ -0,0 +1,43 @@
+---
+menu:
+  learn:
+    parent: Workflow
+title: Community Patterns
+weight: 42
+aliases: /requirements/community/
+---
+
+:toc:
+
+= Community Pattern Requirements
+
+[id="tldr"]
+== tl;dr
+
+* *What are they:* Best practice implementations conforming to the Validated Patterns implementation practices
+* *Purpose:* Codify best practices and promote collaboration between different groups inside, and external to, Red Hat
+* *Creator:* Customers, Partners, GSIs, Services/Consultants, SAs, and other Red Hat teams
+
+[id="requirements"]
+== Requirements
+
+General requirements for all Community, and Validated patterns
+
+[id="base"]
+=== Base
+
+. Patterns *MUST* include a top-level README highlighting the business problem and how the pattern solves it
+. Patterns *MUST* include an architecture drawing. The specific tool/format is flexible as long as the meaning is clear.
+. 
Patterns *MUST* undergo an informal architecture review by a community leader to ensure that the solution has the right products, and they are generally being used as intended. ++ +For example: not using a database as a message bus. +As community leaders, contributions from within Red Hat may be subject to a higher level of scrutiny +While we strive to be inclusive, the community will have quality standards and generally using the framework does not automatically imply a solution is suitable for the community to endorse/publish. + +. Patterns *MUST* undergo an informal technical review by a community leader to ensure that it conforms to the link:/requirements/implementation/[technical requirements] and meets basic reuse standards +. Patterns *MUST* document their support policy ++ +It is anticipated that most community patterns will be supported by the community on a best-effort basis, but this should be stated explicitly. +The validated patterns team commits to maintaining the framework but will also accept help. + +. Patterns SHOULD include a recorded demo highlighting the business problem and how the pattern solves it diff --git a/content/learn/community.md b/content/learn/community.md deleted file mode 100644 index 36189769e..000000000 --- a/content/learn/community.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -menu: - learn: - parent: Workflow -title: Community Patterns -weight: 42 -aliases: /requirements/community/ ---- - -# Community Pattern Requirements - -## tl;dr - -* **What are they:** Best practice implementations conforming to the Validated Patterns implementation practices -* **Purpose:** Codify best practices and promote collaboration between different groups inside, and external to, Red Hat -* **Creator:** Customers, Partners, GSIs, Services/Consultants, SAs, and other Red Hat teams - -## Requirements - -General requirements for all Community, and Validated patterns - -### Base - -1. Patterns **MUST** include a top-level README highlighting the business problem and how the pattern solves it -1. Patterns **MUST** include an architecture drawing. The specific tool/format is flexible as long as the meaning is clear. -1. Patterns **MUST** undergo an informal architecture review by a community leader to ensure that the solution has the right products, and they are generally being used as intended. - - For example: not using a database as a message bus. - As community leaders, contributions from within Red Hat may be subject to a higher level of scrutiny - While we strive to be inclusive, the community will have quality standards and generally using the framework does not automatically imply a solution is suitable for the community to endorse/publish. - -1. Patterns **MUST** undergo an informal technical review by a community leader to ensure that it conforms to the [technical requirements](/requirements/implementation/) and meets basic reuse standards -1. Patterns **MUST** document their support policy - - It is anticipated that most community patterns will be supported by the community on a best-effort basis, but this should be stated explicitly. - The validated patterns team commits to maintaining the framework but will also accept help. - -1. 
Patterns SHOULD include a recorded demo highlighting the business problem and how the pattern solves it diff --git a/content/learn/faq.md b/content/learn/faq.adoc similarity index 56% rename from content/learn/faq.md rename to content/learn/faq.adoc index c50fa8d3a..974fd8365 100644 --- a/content/learn/faq.md +++ b/content/learn/faq.adoc @@ -5,40 +5,46 @@ weight: 90 aliases: /faq/ --- -# FAQ +:toc: -## What is a Hybrid Cloud Pattern? += FAQ + +[id="what-is-a-hybrid-cloud-pattern"] +== What is a Hybrid Cloud Pattern? Hybrid Cloud Patterns are collections of applications (in the ArgoCD sense) that demonstrate aspects of hub/edge computing that seem interesting and useful. Hybrid Cloud Patterns will generally have a hub or centralized component, and an edge component. These will interact in different ways. Many things have changed in the IT landscape in the last few years - containers and kubernetes have taken the industry by storm, but they introduce many technologies and concepts. It is not always clear how these technologies and concepts play together - and Hybrid Cloud Patterns is our effort to show these technologies working together on non-trivial applications in ways that make sense for real customers and partners to use. -The first Hybrid Cloud Pattern is based on [MANUela](https://github.com/sa-mw-dach/manuela), an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, an s3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible for only gathering data from instrumented line devices and shares them via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis. +The first Hybrid Cloud Pattern is based on https://github.com/sa-mw-dach/manuela[MANUela], an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, an s3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible for only gathering data from instrumented line devices and shares them via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis. We are actively developing new Hybrid Cloud Patterns. Watch this space for updates! -## How are they different from XYZ? +[id="how-are-they-different-from-xyz"] +== How are they different from XYZ? Many technology demos can be very minimal - such demos have an important place in the ecosystem to demonstrate the intent of an individual technology. 
Hybrid Cloud Patterns are meant to demonstrate groups of technologies working together in a cloud native way. And yet, we hope to make these patterns general enough to allow for swapping application components out -- for example, if you want to swap out ActiveMQ for RabbitMQ to support MQTT - or use a different messaging technology altogether, that should be possible. The other components will require reconfiguration. -## What technologies are used? +[id="what-technologies-are-used"] +== What technologies are used? Key technologies in the stack for Industrial Edge include: -- Red Hat OpenShift Container Platform -- Red Hat Advanced Cluster Management -- Red Hat OpenShift GitOps (based on ArgoCD) -- Red Hat OpenShift Pipelines (based on tekton) -- Red Hat Integration - AMQ Broker (ActiveMQ Artemis MQTT) -- Red Hat Integration - AMQ Streams (Kafka) -- Red Hat Integration - Camel K -- Seldon Operator +* Red Hat OpenShift Container Platform +* Red Hat Advanced Cluster Management +* Red Hat OpenShift GitOps (based on ArgoCD) +* Red Hat OpenShift Pipelines (based on tekton) +* Red Hat Integration - AMQ Broker (ActiveMQ Artemis MQTT) +* Red Hat Integration - AMQ Streams (Kafka) +* Red Hat Integration - Camel K +* Seldon Operator In the future, we expect to further use Red Hat OpenShift, and expand the integrations with other elements of the ecosystem. How can the concept of GitOps integrate with a fleet of devices that are not running Kubernetes? What about integrations with baremetal or VM servers? Sounds like a job for Ansible! We expect to tackle some of these problems in future patterns. -## How are they structured? +[id="how-are-they-structured"] +== How are they structured? -Hybrid Cloud Patterns come in parts - we have a [common](https://github.com/hybrid-cloud-patterns/common) repository with logic that will apply to multiple patterns. Layered on top of that is our first pattern - [industrial edge](https://github.com/hybrid-cloud-patterns/industrial-edge). This layout allows for individual applications within a pattern to be swapped out by pointing to different repositories or branches for those individual components by customizing the values files in the root of the repository to point to different branches or forks or even different repositories entirely. (At present, the repositories all have to be on github.com and accessible with the same token.) +Hybrid Cloud Patterns come in parts - we have a https://github.com/hybrid-cloud-patterns/common[common] repository with logic that will apply to multiple patterns. Layered on top of that is our first pattern - https://github.com/hybrid-cloud-patterns/industrial-edge[industrial edge]. This layout allows for individual applications within a pattern to be swapped out by pointing to different repositories or branches for those individual components by customizing the values files in the root of the repository to point to different branches or forks or even different repositories entirely. (At present, the repositories all have to be on github.com and accessible with the same token.) The common repository is primarily concerned with how to deploy the GitOps operator, and to create the namespaces that will be necessary to manage the pattern applications. @@ -46,14 +52,16 @@ The pattern repository has the application-specific layout, and determines which Each application is described as a series of resources that are rendered into GitOps (ArgoCD) via Helm and Kustomize. 
The values for these charts are set by values files that need to be "personalized" (with your local cluster values) as the first step of installation. Subsequent pushes to the gitops repository will be reflected in the clusters running the applications. -## Who is behind this? +[id="who-is-behind-this"] +== Who is behind this? Today, a team of Red Hat engineers including Andrew Beekhof (@beekhof), Lester Claudio (@claudiol), Martin Jackson (@mhjacks), William Henry (@ipbabble), Michele Baldessari (@mbaldessari), Jonny Rickard (@day0hero) and others. Excited or intrigued by what you see here? We'd love to hear your thoughts and ideas! Try the patterns contained here and see below for links to our repositories and issue trackers. -## How can I get involved? +[id="how-can-i-get-involved"] +== How can I get involved? -Try out what we've done and submit issues to our [issue trackers](https://github.com/hybrid-cloud-patterns/industrial-edge/issues). +Try out what we've done and submit issues to our https://github.com/validatedpatterns/industrial-edge/issues[issue trackers]. -We will review pull requests to our [pattern](https://github.com/hybrid-cloud-patterns/common) [repositories](https://validatedpatterns.io/industrial-edge). +We will review pull requests to our https://github.com/validatedpatterns/common[pattern] https://github.com/validatedpatterns/industrial-edge[repositories]. diff --git a/content/learn/implementation.adoc b/content/learn/implementation.adoc new file mode 100644 index 000000000..f05524083 --- /dev/null +++ b/content/learn/implementation.adoc @@ -0,0 +1,74 @@ +--- +menu: + learn: + parent: Workflow +title: Implementation Requirements +weight: 41 +aliases: /requirements/implementation/ +--- + +:toc: + +[id="technical-requirements"] +== Technical Requirements + +Additional requirements specific to the implementation for all Community, and Validated patterns + +[id="must"] +=== Must + +. Patterns *MUST* include one or more Git repositories, in a publicly accessible location, containing configuration elements that can be consumed by the OpenShift GitOps operator (ArgoCD) without supplying custom ArgoCD images. +. Patterns *MUST* be useful without all content stored in private git repos +. Patterns *MUST* include a list of names and versions of all the products and projects being consumed by the pattern +. Patterns *MUST* be useful without any sample applications that are private or lack public sources. ++ +Patterns must not become useless due to bit rot or opaque incompatibilities in closed source "`applications`". + +. Patterns *MUST NOT* store sensitive data elements, including but not limited to passwords, in Git +. Patterns *MUST* be possible to deploy on any IPI-based OpenShift cluster (BYO) ++ +We distinguish between the provisioning and configuration requirements of the initial cluster ("`Patterns`"), and of clusters/machines managed by the initial cluster (see "`Managed clusters`") + +. Patterns *MUST* use a standardized https://github.com/hybrid-cloud-patterns/common/tree/main/clustergroup[clustergroup] Helm chart, as the initial OpenShift GitOps application that describes all namespaces, subscriptions, and any other GitOps applications which contain the configuration elements that make up the solution. +. Managed clusters *MUST* operate on the premise of "`eventual consistency`" (automatic retries, and an expectation of idempotence), which is one of the essential benefits of the GitOps model. +. 
Imperative elements *MUST* be implemented as idempotent code stored in Git + +[id="should"] +=== Should + +. Patterns SHOULD include sample application(s) to demonstrate the business problem(s) addressed by the pattern. +. Patterns SHOULD try to indicate which parts are foundational as opposed to being for demonstration purposes. +. Patterns SHOULD use the VP operator to deploy patterns. However anything that creates the OpenShift GitOps subscription and initial clustergroup application could be acceptable. +. Patterns SHOULD embody the "`open hybrid cloud model`" unless there is a compelling reason to limit the availability of functionality to a specific platform or topology. +. Patterns SHOULD use industry standards and Red Hat products for all required tooling ++ +Patterns prefer current best practices at the time of pattern development. Solutions that do not conform to best practices should expect to justify non-conformance and/or expend engineering effort to conform. + +. Patterns SHOULD NOT make use of upstream/community operators and images except, depending on the market segment, where critical to the overall solution. ++ +Such operators are forbidden to be deployed into an increasing number of customer environments, which limits reuse. +Alternatives include productizing the operator, and building it in-cluster from trusted sources as part of the pattern. + +. Patterns SHOULD be decomposed into modules that perform a specific function, so that they can be reused in other patterns. ++ +For example, Bucket Notification is a capability in the Medical Diagnosis pattern that could be used for other solutions. + +. Patterns SHOULD use Ansible Automation Platform to drive the declarative provisioning and management of managed hosts (e.g. RHEL). See also "`Imperative elements`". +. Patterns SHOULD use RHACM to manage policy and compliance on any managed clusters. +. Patterns SHOULD use RHACM and a https://github.com/hybrid-cloud-patterns/common/tree/main/acm[standardized acm chart] to deploy and configure OpenShift GitOps to managed clusters. +. Managed clusters SHOULD be loosely coupled to their hub, and use OpenShift GitOps to consume applications and configuration directly from Git as opposed to having hard dependencies on a centralized cluster. +. Managed clusters SHOULD use the "`pull`" deployment model for obtaining their configuration. +. Imperative elements SHOULD be implemented as Ansible playbooks +. Imperative elements SHOULD be driven declaratively -- by which we mean that the playbooks should be triggered by Jobs and/or CronJobs stored in Git and delivered by OpenShift GitOps. + +[id="can"] +=== Can + +. Patterns CAN include additional configuration and/or demo elements located in one or more additional private git repos. +. Patterns CAN include automation that deploys a known set of clusters and/or machines in a specific topology +. Patterns CAN limit functionality/testing claims to specific platforms, topologies, and cluster/node sizes +. Patterns CAN consume operators from established partners (e.g. Hashicorp Vault, and Seldon) +. Patterns CAN include managed clusters +. Patterns CAN include details or automation for provisioning managed clusters, or rely on the admin to pre-provision them out-of-band. +. Patterns CAN also choose to model multi-cluster solutions as an uncoordinated collection of "`initial hub clusters`" +. 
Imperative elements CAN interact with cluster state and/or external influences diff --git a/content/learn/implementation.md b/content/learn/implementation.md deleted file mode 100644 index 2a02d9187..000000000 --- a/content/learn/implementation.md +++ /dev/null @@ -1,68 +0,0 @@ ---- -menu: - learn: - parent: Workflow -title: Implementation Requirements -weight: 41 -aliases: /requirements/implementation/ ---- - -## Technical Requirements - -Additional requirements specific to the implementation for all Community, and Validated patterns - -### Must - -1. Patterns **MUST** include one or more Git repositories, in a publicly accessible location, containing configuration elements that can be consumed by the OpenShift GitOps operator (ArgoCD) without supplying custom ArgoCD images. -1. Patterns **MUST** be useful without all content stored in private git repos -1. Patterns **MUST** include a list of names and versions of all the products and projects being consumed by the pattern -1. Patterns **MUST** be useful without any sample applications that are private or lack public sources. - - Patterns must not become useless due to bit rot or opaque incompatibilities in closed source “applications”. - -1. Patterns **MUST NOT** store sensitive data elements, including but not limited to passwords, in Git -1. Patterns **MUST** be possible to deploy on any IPI-based OpenShift cluster (BYO) - - We distinguish between the provisioning and configuration requirements of the initial cluster (“Patterns”), and of clusters/machines managed by the initial cluster (see “Managed clusters”) - -1. Patterns **MUST** use a standardized [clustergroup](https://github.com/hybrid-cloud-patterns/common/tree/main/clustergroup) Helm chart, as the initial OpenShift GitOps application that describes all namespaces, subscriptions, and any other GitOps applications which contain the configuration elements that make up the solution. -1. Managed clusters **MUST** operate on the premise of “eventual consistency” (automatic retries, and an expectation of idempotence), which is one of the essential benefits of the GitOps model. -1. Imperative elements **MUST** be implemented as idempotent code stored in Git - -### Should - -1. Patterns SHOULD include sample application(s) to demonstrate the business problem(s) addressed by the pattern. -1. Patterns SHOULD try to indicate which parts are foundational as opposed to being for demonstration purposes. -1. Patterns SHOULD use the VP operator to deploy patterns. However anything that creates the OpenShift GitOps subscription and initial clustergroup application could be acceptable. -1. Patterns SHOULD embody the “open hybrid cloud model” unless there is a compelling reason to limit the availability of functionality to a specific platform or topology. -1. Patterns SHOULD use industry standards and Red Hat products for all required tooling - - Patterns prefer current best practices at the time of pattern development. Solutions that do not conform to best practices should expect to justify non-conformance and/or expend engineering effort to conform. - -1. Patterns SHOULD NOT make use of upstream/community operators and images except, depending on the market segment, where critical to the overall solution. - - Such operators are forbidden to be deployed into an increasing number of customer environments, which limits reuse. -Alternatives include productizing the operator, and building it in-cluster from trusted sources as part of the pattern. - -1. 
Patterns SHOULD be decomposed into modules that perform a specific function, so that they can be reused in other patterns. - - For example, Bucket Notification is a capability in the Medical Diagnosis pattern that could be used for other solutions. - -1. Patterns SHOULD use Ansible Automation Platform to drive the declarative provisioning and management of managed hosts (e.g. RHEL). See also “Imperative elements”. -1. Patterns SHOULD use RHACM to manage policy and compliance on any managed clusters. -1. Patterns SHOULD use RHACM and a [standardized acm chart](https://github.com/hybrid-cloud-patterns/common/tree/main/acm) to deploy and configure OpenShift GitOps to managed clusters. -1. Managed clusters SHOULD be loosely coupled to their hub, and use OpenShift GitOps to consume applications and configuration directly from Git as opposed to having hard dependencies on a centralized cluster. -1. Managed clusters SHOULD use the “pull” deployment model for obtaining their configuration. -1. Imperative elements SHOULD be implemented as Ansible playbooks -1. Imperative elements SHOULD be driven declaratively – by which we mean that the playbooks should be triggered by Jobs and/or CronJobs stored in Git and delivered by OpenShift GitOps. - -### Can - -1. Patterns CAN include additional configuration and/or demo elements located in one or more additional private git repos. -1. Patterns CAN include automation that deploys a known set of clusters and/or machines in a specific topology -1. Patterns CAN limit functionality/testing claims to specific platforms, topologies, and cluster/node sizes -1. Patterns CAN consume operators from established partners (e.g. Hashicorp Vault, and Seldon) -1. Patterns CAN include managed clusters -1. Patterns CAN include details or automation for provisioning managed clusters, or rely on the admin to pre-provision them out-of-band. -1. Patterns CAN also choose to model multi-cluster solutions as an uncoordinated collection of “initial hub clusters” -1. Imperative elements CAN interact with cluster state and/or external influences diff --git a/content/learn/importing-a-cluster.adoc b/content/learn/importing-a-cluster.adoc index f455e80f9..b38f251d9 100644 --- a/content/learn/importing-a-cluster.adoc +++ b/content/learn/importing-a-cluster.adoc @@ -9,17 +9,16 @@ aliases: /learn/importing-a-cluster/ :toc: -[id="importing-a-cluster"] - +[id="importing-a-cluster"] == Importing a managed cluster -Many validated patterns require importing a cluster into a managed group. These groups have specific application sets that will be deployed and managed. Some examples are factory clusters in the Industrial Edge pattern, or development clusters in Multi-cluster DevSecOps pattern. +Many validated patterns require importing a cluster into a managed group. These groups have specific application sets that will be deployed and managed. Some examples are factory clusters in the Industrial Edge pattern, or development clusters in Multi-cluster DevSecOps pattern. Red Hat Advanced Cluster Management (RHACM) can be used to create a cluster of a specific cluster group type. You can deploy a specific cluster that way if you have RHACM set up with credentials for deploying clusters. However in many cases an OpenShift cluster has already been created and will be imported into the set of clusters that RHACM is managing. While you can create and deploy in this manner this section concentrates on importing an existing cluster and designating a specific managed cluster group type. 
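A managed cluster is typically tied to its cluster group by a label on the corresponding ManagedCluster resource on the hub. As a minimal sketch (the cluster name `edge-cluster-1` and the label `clusterGroup=factory` are illustrative; use the group name that your pattern's values files expect):

[source,terminal]
----
# List the clusters currently known to the RHACM hub
oc get managedclusters

# Label the imported cluster with the managed cluster group used by the pattern
oc label managedcluster edge-cluster-1 clusterGroup=factory
----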
-To deploy a cluster that can be imported into RHACM, use the `openshift-install` program provided at https://console.redhat.com/openshift/create[console.redhat.com]. You will need login credentials.
+To deploy a cluster that can be imported into RHACM, use the `openshift-install` program provided at https://console.redhat.com/openshift/create[console.redhat.com]. You will need login credentials.
 
 == Importing a cluster using the RHACM User Interface
 
@@ -39,7 +38,7 @@ Using this method, you are done. Skip to the section in your pattern documentati
 
 == Other potential import tools
 
-There are a two other known ways to join a cluster to the RHACM hub. These methods are not supported but have been tested once. The patterns team no longer tests these methods. If these methods become supported we will maintain the documentation here. 
+There are two other known ways to join a cluster to the RHACM hub. These methods are not supported but have been tested once. The patterns team no longer tests these methods. If these methods become supported we will maintain the documentation here.
 
 * Using the link:https://github.com/stolostron/cm-cli[cm-cli] tool
 * Using the link:https://github.com/open-cluster-management-io/clusteradm[clusteradm] tool
@@ -81,7 +80,7 @@
 oc login
 ----
 +
-or 
+or
 +
 [source,terminal]
diff --git a/content/learn/infrastructure.md b/content/learn/infrastructure.adoc
similarity index 59%
rename from content/learn/infrastructure.md
rename to content/learn/infrastructure.adoc
index ba74ea30e..da8405840 100644
--- a/content/learn/infrastructure.md
+++ b/content/learn/infrastructure.adoc
@@ -5,12 +5,14 @@ weight: 30
 aliases: /infrastructure/
 ---
 
-# Infrastructure
+:toc:
 
-## Background
+[id="background"]
+== Background
 
-Each validated pattern has infrastructure requirements. The majority of the validated patterns will run Red Hat OpenShift while some parts will run directly on Red Hat Enterprise Linux or (RHEL), more likely, a version of RHEL called RHEL for Edge. It is expected that consumers of validated patterns already have the infrastructure in place using existing reliable and supported deployment tools. For more information and tools head over to [console.redhat.com](https://console.redhat.com/)
+Each validated pattern has infrastructure requirements. The majority of the validated patterns will run Red Hat OpenShift while some parts will run directly on Red Hat Enterprise Linux (RHEL) or, more likely, a version of RHEL called RHEL for Edge. It is expected that consumers of validated patterns already have the infrastructure in place using existing reliable and supported deployment tools. For more information and tools head over to https://console.redhat.com/[console.redhat.com]
 
-## Sizing
+[id="sizing"]
+== Sizing
 
-In this section we provide general minimum sizing requirements for such infrastructure but it is important to review specific requirements for a specific validated pattern. For example, [Industrial Edge 2.0](/industrial-edge/) employs AI/Ml technology that requires large machine instances to support the applications deployed on OpenShift at the datacenter.
+In this section we provide general minimum sizing requirements for such infrastructure but it is important to review the specific requirements of each validated pattern. For example, link:/industrial-edge/[Industrial Edge 2.0] employs AI/ML technology that requires large machine instances to support the applications deployed on OpenShift at the datacenter.
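Worker node sizing for such patterns is normally chosen at install time. The following is a hedged sketch of an `install-config.yaml` compute pool for an AWS IPI cluster; the instance type and replica count are illustrative assumptions, so follow the sizing guidance of the specific pattern:

[source,yaml]
----
# Excerpt from install-config.yaml (AWS IPI); values shown are examples only
compute:
- name: worker
  replicas: 3
  platform:
    aws:
      type: m5.4xlarge  # larger instance type for AI/ML-heavy patterns
----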
diff --git a/content/learn/ocp-cluster-general-sizing.md b/content/learn/ocp-cluster-general-sizing.adoc similarity index 68% rename from content/learn/ocp-cluster-general-sizing.md rename to content/learn/ocp-cluster-general-sizing.adoc index eaa6be2aa..41af31e4f 100644 --- a/content/learn/ocp-cluster-general-sizing.md +++ b/content/learn/ocp-cluster-general-sizing.adoc @@ -7,59 +7,78 @@ weight: 31 aliases: /infrastructure/ocp-cluster-general-sizing/ --- -# OpenShift General Sizing +:toc: -## Recommended node host practices += OpenShift General Sizing -The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: `podsPerCore` and `maxPods`. +[id="recommended-node-host-practices"] +== Recommended node host practices + +The OpenShift Container Platform node configuration file contains important options. For example, two parameters control the maximum number of pods that can be scheduled to a node: `podsPerCore` and `maxPods`. When both options are in use, the lower of the two values limits the number of pods on a node. Exceeding these values can result in: -- Increased CPU utilization. -- Slow pod scheduling. -- Potential out-of-memory scenarios, depending on the amount of memory in the node. -- Exhausting the pool of IP addresses. -- Resource overcommitting, leading to poor user application performance. +* Increased CPU utilization. +* Slow pod scheduling. +* Potential out-of-memory scenarios, depending on the amount of memory in the node. +* Exhausting the pool of IP addresses. +* Resource overcommitting, leading to poor user application performance. In Kubernetes, a pod that is holding a single container actually uses two containers. The second container is used to set up networking prior to the actual container starting. Therefore, a system running 10 pods will actually have 20 containers running. -**podsPerCore** sets the number of pods the node can run based on the number of processor cores on the node. For example, if **podsPerCore** is set to `10` on a node with 4 processor cores, the maximum number of pods allowed on the node will be `40`. +*podsPerCore* sets the number of pods the node can run based on the number of processor cores on the node. For example, if *podsPerCore* is set to `10` on a node with 4 processor cores, the maximum number of pods allowed on the node will be `40`. -```yaml +[source,yaml] +---- kubeletConfig: podsPerCore: 10 -``` +---- -Setting **podsPerCore** to `0` disables this limit. The default is `0`. **podsPerCore** cannot exceed `maxPods`. +Setting *podsPerCore* to `0` disables this limit. The default is `0`. *podsPerCore* cannot exceed `maxPods`. -**maxPods** sets the number of pods the node can run to a fixed value, regardless of the properties of the node. +*maxPods* sets the number of pods the node can run to a fixed value, regardless of the properties of the node. -```yaml +[source,yaml] +---- kubeletConfig: maxPods: 250 -``` +---- -For more information about sizing and Red Hat standard host practices see the [Official OpenShift Documentation Page](https://docs.openshift.com/container-platform/4.8/scalability_and_performance/recommended-host-practices.html) for recommended host practices. 
+For more information about sizing and Red Hat standard host practices see the https://docs.openshift.com/container-platform/4.8/scalability_and_performance/recommended-host-practices.html[Official OpenShift Documentation Page] for recommended host practices. -## Control plane node sizing +[id="control-plane-node-sizing"] +== Control plane node sizing The control plane node resource requirements depend on the number of nodes in the cluster. The following control plane node size recommendations are based on the results of control plane density focused testing. The control plane tests create the following objects across the cluster in each of the namespaces depending on the node counts: -- 12 image streams -- 3 build configurations -- 6 builds -- 1 deployment with 2 pod replicas mounting two secrets each -- 2 deployments with 1 pod replica mounting two secrets -- 3 services pointing to the previous deployments -- 3 routes pointing to the previous deployments -- 10 secrets, 2 of which are mounted by the previous deployments -- 10 config maps, 2 of which are mounted by the previous deployments - -| Number of worker nodes | Cluster load (namespaces) | CPU cores | Memory (GB) -| :-------- | :---------- | :------------ | :------------- | -| 25 | 500 | 4 | 16 -| 100 | 1000 | 8 | 32 -| 250 | 4000 | 16 | 96 +* 12 image streams +* 3 build configurations +* 6 builds +* 1 deployment with 2 pod replicas mounting two secrets each +* 2 deployments with 1 pod replica mounting two secrets +* 3 services pointing to the previous deployments +* 3 routes pointing to the previous deployments +* 10 secrets, 2 of which are mounted by the previous deployments +* 10 config maps, 2 of which are mounted by the previous deployments + +|=== +| Number of worker nodes | Cluster load (namespaces) | CPU cores | Memory (GB) + +| 25 +| 500 +| 4 +| 16 + +| 100 +| 1000 +| 8 +| 32 + +| 250 +| 4000 +| 16 +| 96 +|=== On a cluster with three masters or control plane nodes, the CPU and memory usage will spike up when one of the nodes is stopped, rebooted or fails because the remaining two nodes must handle the load in order to be highly available. This is also expected during upgrades because the masters are cordoned, drained, and rebooted serially to apply the operating system updates, as well as the control plane Operators update. To avoid cascading failures on large and dense clusters, keep the overall resource usage on the master nodes to at most half of all available capacity to handle the resource usage spikes. Increase the CPU and memory on the master nodes accordingly. @@ -71,9 +90,10 @@ The recommendations are based on the data points captured on OpenShift Container In OpenShift Container Platform 4.5, half of a CPU core (500 millicore) is now reserved by the system by default compared to OpenShift Container Platform 3.11 and previous versions. The sizes are determined taking that into consideration. -For more information about sizing and Red Hat standard host practices see the [Official OpenShift Documentation Page](https://docs.openshift.com/container-platform/4.8/scalability_and_performance/recommended-host-practices.html) for recommended host practices. +For more information about sizing and Red Hat standard host practices see the https://docs.openshift.com/container-platform/4.8/scalability_and_performance/recommended-host-practices.html[Official OpenShift Documentation Page] for recommended host practices. 
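The `podsPerCore` and `maxPods` snippets above are fragments of a `KubeletConfig` custom resource. A minimal sketch of a complete resource follows; the pool label and the limit are illustrative, and the label must match one applied to the target MachineConfigPool:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: set-max-pods
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: set-max-pods  # example label added to the worker MachineConfigPool
  kubeletConfig:
    maxPods: 250
----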
-## Recommended etcd practices +[id="recommended-etcd-practices"] +== Recommended etcd practices For large and dense clusters, etcd can suffer from poor performance if the keyspace grows excessively large and exceeds the space quota. Periodic maintenance of etcd, including defragmentation, must be performed to free up space in the data store. It is highly recommended that you monitor Prometheus for etcd metrics and defragment it when required before etcd raises a cluster-wide alarm that puts the cluster into a maintenance mode, which only accepts key reads and deletes. Some of the key metrics to monitor are `etcd_server_quota_backend_bytes` which is the current quota limit, `etcd_mvcc_db_total_size_in_use_in_bytes` which indicates the actual database usage after a history compaction, and `etcd_debugging_mvcc_db_total_size_in_bytes` which shows the database size including free space waiting for defragmentation. Instructions on defragging etcd can be found in the `Defragmenting etcd data` section. @@ -81,4 +101,4 @@ Etcd writes data to disk, so its performance strongly depends on disk performanc Some of the key metrics to monitor on a deployed OpenShift Container Platform cluster are p99 of etcd disk write ahead log duration and the number of etcd leader changes. Use Prometheus to track these metrics. `etcd_disk_wal_fsync_duration_seconds_bucket` reports the etcd disk fsync duration, `etcd_server_leader_changes_seen_total` reports the leader changes. To rule out a slow disk and confirm that the disk is reasonably fast, 99th percentile of the `etcd_disk_wal_fsync_duration_seconds_bucket` should be less than 10ms. -For more information about sizing and Red Hat standard host practices see the [Official OpenShift Documentation Page](https://docs.openshift.com/container-platform/4.8/scalability_and_performance/recommended-host-practices.html) for recommended host practices. +For more information about sizing and Red Hat standard host practices see the https://docs.openshift.com/container-platform/4.8/scalability_and_performance/recommended-host-practices.html[Official OpenShift Documentation Page] for recommended host practices. diff --git a/content/learn/rhel-for-edge-general-sizing.md b/content/learn/rhel-for-edge-general-sizing.adoc similarity index 58% rename from content/learn/rhel-for-edge-general-sizing.md rename to content/learn/rhel-for-edge-general-sizing.adoc index e46d44cfa..59ceeaa81 100644 --- a/content/learn/rhel-for-edge-general-sizing.md +++ b/content/learn/rhel-for-edge-general-sizing.adoc @@ -7,8 +7,11 @@ weight: 32 aliases: /infrastructure/rhel-for-edge-general-sizing/ --- -# RHEL for Edge General Sizing +:toc: -## Recommended node host practices += RHEL for Edge General Sizing + +[id="recommended-node-host-practices"] +== Recommended node host practices TBD diff --git a/content/learn/secrets.md b/content/learn/secrets.adoc similarity index 67% rename from content/learn/secrets.md rename to content/learn/secrets.adoc index 6be2c82d2..d72a00862 100644 --- a/content/learn/secrets.md +++ b/content/learn/secrets.adoc @@ -5,29 +5,34 @@ weight: 60 aliases: /secrets/ --- -# Infrastructure +:toc: -## Background += Infrastructure + +[id="background"] +== Background Enterprise applications require security, especially in multi-cluster and multi-site environments. Applications require trust and use certificates and other secrets in order to establish and maintain trust. In this section we will look at various ways of managing secrets. 
When you start developing distributed enterprise applications there is a strong temptation to ignore security during development and add it at the end. This is proven to be a very bad practice that accumulates technical debt that sometimes never gets resolved. -While the DevOps model of development strongly encourages *shifting security to the left* many developers didn't really take notice and so the more explicit term DevSecOps was created. Essentially, "pay attention and consider and implement security as early as possible in the lifecycle". (i.e. shift left on the time line). +While the DevOps model of development strongly encourages _shifting security to the left_ many developers didn't really take notice and so the more explicit term DevSecOps was created. Essentially, "pay attention and consider and implement security as early as possible in the lifecycle". (i.e. shift left on the time line). -## Secret Management +[id="secret-management"] +== Secret Management One area that has been impacted by a more automated approach to security is in the secret management. DevOps (and DevSecOps) environments require the use of many different services: -1. Code repositories -1. GitOps tools -1. Image repositories -1. Build pipelines +. Code repositories +. GitOps tools +. Image repositories +. Build pipelines All of these services require credentials. (Or should do!) And keeping those credentials secret is very important. E.g. pushing your credentials to your personal GitHub/GitLab repository is not a secure solution. -While using a file based secret management can work if done correctly, most organizations opt for a more enterprise solution using a secret management product or project. The Cloud Native Computing Foundation (CNCF) has many such [projects](https://radar.cncf.io/2021-02-secrets-management). The Hybrid Cloud Patterns project has started with [Hashicorp Vault](https://github.com/hashicorp/vault) secret management product but we look forward to other project contributions. +While using a file based secret management can work if done correctly, most organizations opt for a more enterprise solution using a secret management product or project. The Cloud Native Computing Foundation (CNCF) has many such https://radar.cncf.io/2021-02-secrets-management[projects]. The Hybrid Cloud Patterns project has started with https://github.com/hashicorp/vault[Hashicorp Vault] secret management product but we look forward to other project contributions. -## What's next? +[id="whats-next"] +== What's next? -[Getting started with Vault](/secrets/vault/) +link:/secrets/vault/[Getting started with Vault] diff --git a/content/learn/validated.adoc b/content/learn/validated.adoc new file mode 100644 index 000000000..8e60f5318 --- /dev/null +++ b/content/learn/validated.adoc @@ -0,0 +1,123 @@ +--- +menu: + learn: + parent: Workflow +title: Validated Patterns +weight: 43 +aliases: /requirements/validated/ +--- + +:toc: + += Validated Pattern Requirements + +[id="tldr"] +== tl;dr + +* *What are they:* Technical foundations, backed by CI, that have succeeded in the market and Red Hat expects to be repeatable across customers and segments. +* *Purpose:* Reduce risk, accelerate sales cycles, and allow consulting organizations to be more effective. 
+* *Creator:* The Validated Patterns team in conjunction with: Partners, GSIs, Services/Consultants, SAs, and other Red Hat teams + +[id="onboarding-existing-implementations"] +== Onboarding Existing Implementations + +The Validated Patterns team has a preference for empowering other, and not +taking credit for their work. + +Where there is an existing application/demo, there is also a strong preference +for the originating team to own any changes that are needed for the +implementation to become a validated pattern. Alternatively, if the Validated +Patterns team drives the conversion, then in order to prevent confusion and +duplicated efforts, we are likely to ask for a commitment to phase out use of +the previous implementation for future engagements such as demos, presentations, +and workshops. + +The goal is to avoid bringing a parallel implementation into existence which +divides Red Hat resources, and creates confusion internally and with customers +as the implementations drift apart. + +In both scenarios the originating team can choose where to host the primary +repository, will be given admin permissions to any fork in +https://github.com/hybrid-cloud-patterns, +and will receive on-going assistance from the Validated Patterns team. + +[id="nominating-a-community-pattern-to-become-validated"] +=== Nominating a Community Pattern to become Validated + +If there is a community pattern that you believe would be a good candidate for +becoming validated, please email hybrid-cloud-patterns@googlegroups.com at least +4 weeks prior to the end of a given quarter in order for the necessary work to be +considered as part of the following quarter's planning process. + +Please be aware that each Validated Pattern represents an ongoing maintenance, support, +and CI effort. Finite team capacity means we must critically balance this cost against +the potential customer opportunity. A "`no`" or "`not yet`" result is not intended as an +insult against the pattern or its author. + +[id="requirements"] +== Requirements + +Validated Patterns have deliverable and requirements in addition to those +specified for link:/requirements/community/[Community-level] patterns + +[id="must"] +=== Must + +. Validated Patterns *MUST* contain more than two RH products. Alternative: Engage with the BU +. Validated Patterns, or the solution on which they are based, *MUST* have been deployed and approved for use in at least one customer environment. ++ +Alternative: link:/requirements/community[Community Pattern] + +. Validated Patterns *MUST* be meaningful without specialized hardware, including flavors of architectures not explicitly supported. Alternative: Engage with DCI ++ +Qualification is a Validated Patterns engineering decision with input from the pattern owner. + +. Validated Patterns *MUST* be broadly applicable. Alternative: Engage with Phased Gate and/or TAMs ++ +Qualification is a Validated Patterns PM decision with input from the pattern owner. + +. Validated Patterns *MUST* only make use of Red Hat products that are already fully supported by their product team(s). +. Validated Patterns *MUST NOT* rely on functionality in tech-preview, or hidden behind feature gates. +. Validated Patterns *MUST* conform to the same Community-level link:/requirements/implementation/[implementation requirements] +. Validated Patterns *MUST* have their architectures reviewed by the PM, TPM, or TMM of each Red Hat product they consume to ensure consistency with the product teams`' intentions and roadmaps +. 
Validated Patterns *MUST* have their implementation reviewed by the patterns team to ensure that it is sufficiently flexible to function across a variety of platforms, customer environments, and any relevant verticals.
+. Validated Patterns *MUST* include a standardized architecture drawing, created with (or at least conforming to) the PAC tooling
+. Validated Patterns *MUST* include a presentation deck oriented around the business problem being solved and intended for use by the field to sell and promote the solution
+. Validated Patterns *MUST* include a recorded demo highlighting the business problem and how the pattern solves it
+. Validated Patterns *MUST* include a test plan covering all features or attributes being highlighted by the demo that also spans multiple products. Negative flow tests (such as resiliency or data retention in the presence of network outages) are limited to scenarios covered by the demonstration script.
+. Validated Patterns *MUST* include automated CI testing that runs on every change to the pattern, or on a schedule no less frequently than once per week
+. Validated Patterns *MUST* create a new point release of the validation-level deliverables when minor versions (e.g. "`12`" in OpenShift 4.12) of consumed products change
+. Validated Patterns *MUST* document their support policy
++
+The individual products used in a Validated Pattern are backed by the full Red Hat support experience conditional on the customer's subscription to those products, and the individual products`' support policy.
+Additional components in a Validated Pattern that are not supported by Red Hat (e.g. Hashicorp Vault, and Seldon Core) will require a customer to obtain support from that vendor directly.
+The validated patterns team is very motivated to address any problems in the VP Operator, as well as problems in the common helm charts, but cannot offer any SLAs at this time.
+See also our standard disclaimer
+
+. Validated Patterns *DO NOT* imply an obligation of support for partner or community operators by Red Hat.
+
+[id="should"]
+=== Should
+
+. Validated Patterns SHOULD focus on functionality, not performance.
+. Validated Patterns SHOULD trigger CI runs for new versions of consumed products
+. Validated Patterns SHOULD provide an RHPDS lab environment
++
+A bare bones environment into which the solution can be deployed, and a list of instructions for doing so (e.g. installing and configuring OpenShift GitOps)
+
+. Validated Patterns SHOULD provide a pre-built demo environment using RHPDS
++
+Having an automated demo within the RHPDS system that will be built based on the current stable version that is run against the CI testing system
+
+. Validated Patterns SHOULD track deployments of each validation-level deliverable
++
+For lifecycle decisions like discontinuing support of a version, and
+for notification if problems are found in our CI
+
+[id="can"]
+=== Can
+
+. Teams creating Validated Patterns CAN provide their own SLA
++
+A document for QE that defines, at a technical level, how to validate if the pattern has been successfully deployed and is functionally operational.
+Example: https://docs.google.com/document/d/12KQhdzjVIsxRURTnWAckiEMB3_96oWBjtlTXi1q73cg/view[Validating an Industrial Edge Deployment] diff --git a/content/learn/validated.md b/content/learn/validated.md deleted file mode 100644 index 8760f02d8..000000000 --- a/content/learn/validated.md +++ /dev/null @@ -1,114 +0,0 @@ ---- -menu: - learn: - parent: Workflow -title: Validated Patterns -weight: 43 -aliases: /requirements/validated/ ---- - -# Validated Pattern Requirements - -## tl;dr - -* **What are they:** Technical foundations, backed by CI, that have succeeded in the market and Red Hat expects to be repeatable across customers and segments. -* **Purpose:** Reduce risk, accelerate sales cycles, and allow consulting organizations to be more effective. -* **Creator:** The Validated Patterns team in conjunction with: Partners, GSIs, Services/Consultants, SAs, and other Red Hat teams - -## Onboarding Existing Implementations - -The Validated Patterns team has a preference for empowering other, and not -taking credit for their work. - -Where there is an existing application/demo, there is also a strong preference -for the originating team to own any changes that are needed for the -implementation to become a validated pattern. Alternatively, if the Validated -Patterns team drives the conversion, then in order to prevent confusion and -duplicated efforts, we are likely to ask for a commitment to phase out use of -the previous implementation for future engagements such as demos, presentations, -and workshops. - -The goal is to avoid bringing a parallel implementation into existence which -divides Red Hat resources, and creates confusion internally and with customers -as the implementations drift apart. - -In both scenarios the originating team can choose where to host the primary -repository, will be given admin permissions to any fork in -[https://github.com/hybrid-cloud-patterns](https://github.com/hybrid-cloud-patterns), -and will receive on-going assistance from the Validated Patterns team. - -### Nominating a Community Pattern to become Validated - -If there is a community pattern that you believe would be a good candidate for -becoming validated, please email hybrid-cloud-patterns@googlegroups.com at least -4 weeks prior to the end of a given quarter in order for the necessary work to be -considered as part of the following quarter’s planning process. - -Please be aware that each Validated Pattern represents an ongoing maintenance, support, -and CI effort. Finite team capacity means we must critically balance this cost against -the potential customer opportunity. A “no” or “not yet” result is not intended as an -insult against the pattern or its author. - -## Requirements - -Validated Patterns have deliverable and requirements in addition to those -specified for [Community-level](/requirements/community/) patterns - -### Must - -1. Validated Patterns **MUST** contain more than two RH products. Alternative: Engage with the BU -1. Validated Patterns, or the solution on which they are based, **MUST** have been deployed and approved for use in at least one customer environment. - - Alternative: [Community Pattern](/requirements/community) - -1. Validated Patterns **MUST** be meaningful without specialized hardware, including flavors of architectures not explicitly supported. Alternative: Engage with DCI - - Qualification is a Validated Patterns engineering decision with input from the pattern owner. - -1. Validated Patterns **MUST** be broadly applicable. 
Alternative: Engage with Phased Gate and/or TAMs - - Qualification is a Validated Patterns PM decision with input from the pattern owner. - -1. Validated Patterns **MUST** only make use of Red Hat products that are already fully supported by their product team(s). -1. Validated Patterns **MUST NOT** rely on functionality in tech-preview, or hidden behind feature gates. -1. Validated Patterns **MUST** conform to the same Community-level [implementation requirements](/requirements/implementation/) -1. Validated Patterns **MUST** have their architectures reviewed by the PM, TPM, or TMM of each Red Hat product they consume to ensure consistency with the product teams’ intentions and roadmaps -1. Validated Patterns **MUST** have their implementation reviewed by the patterns team to ensure that it is sufficiently flexible to function across a variety of platforms, customer environments, and any relevant verticals. -1. Validated Patterns **MUST** include a standardized architecture drawing, created with (or at least conforming to) the PAC tooling -1. Validated Patterns **MUST** include a presentation deck oriented around the business problem being solved and intended for use by the field to sell and promote the solution -1. Validated Patterns **MUST** include a recorded demo highlighting the business problem and how the pattern solves it -1. Validated Patterns **MUST** include a test plan covering all features or attributes being highlighted by the demo that also spans multiple products. Negative flow tests (such as resiliency or data retention in the presence of network outages) are limited to scenarios covered by the demonstration script. -1. Validated Patterns **MUST** include automated CI testing that runs on every change to the pattern, or a schedule no less frequently than once per week -1. Validated Patterns **MUST** create a new point release of the validation-level deliverables when minor versions (e.g. “12” in OpenShift 4.12) of consumed products change -1. Validated Patterns **MUST** document their support policy - - The individual products used in a Validated Pattern are backed by the full Red Hat support experience conditional on the customer’s subscription to those products, and the individual products’ support policy. - Additional components in a Validated Pattern that are not supported by Red Hat (e.g. Hashicorp Vault, and Seldon Core) will require a customer to obtain support from that vendor directly. - The validated patterns team is very motivated to address any problems in the VP Operator, as well as problems in the common helm charts, but cannot not offer any SLAs at this time. - See also our standard disclaimer - -1. Validated Patterns **DO NOT** imply an obligation of support for partner or community operators by Red Hat. - -### Should - -1. Validated Patterns SHOULD focus on functionality not performance. -1. Validated Patterns SHOULD trigger CI runs for new versions of consumed products -1. Validated Patterns SHOULD provide an RHPDS lab environment - - A bare bones environment into which the solution can be deployed, and a list of instructions for doing so (e.g. installing and configuring OpenShift GitOps) - -1. Validated Patterns SHOULD provide pre-built demo environment using RHPDS - - Having an automated demo within the RHPDS system, that will be built based on the current stable version that is run against the CI testing system - -1. 
Validated Patterns SHOULD track deployments of each validation-level deliverable
-
-    For lifecycle decisions like discontinuing support of a version
-    For notification if problems are found in our CI
-
-### Can
-
-1. Teams creating Validated Patterns CAN provide their own SLA
-
-    A document for QE that defines, at a technical level, how to validate if the pattern has been successfully deployed and is functionally operational.
-    Example: [Validating an Industrial Edge Deployment](https://docs.google.com/document/d/12KQhdzjVIsxRURTnWAckiEMB3_96oWBjtlTXi1q73cg/view)
diff --git a/content/learn/vault.md b/content/learn/vault.adoc
similarity index 66%
rename from content/learn/vault.md
rename to content/learn/vault.adoc
index 26183c91d..8d6e66aaa 100644
--- a/content/learn/vault.md
+++ b/content/learn/vault.adoc
@@ -7,26 +7,31 @@ weight: 61
 aliases: /secrets/vault/
 ---
 
-# Deploying HashiCorp Vault in a validated pattern
+:toc:
 
-# Prerequisites
+= Deploying HashiCorp Vault in a validated pattern
+
+[id="prerequisites"]
+== Prerequisites
 
 You have deployed/installed a validated pattern using the instructions provided for that pattern. This should include setting having logged into the cluster using `oc login` or setting you `KUBECONFIG` environment variable and running a `make install`.
 
-# Setting up HashiCorp Vault
+[id="setting-up-hashicorp-vault"]
+== Setting up HashiCorp Vault
 
 Any validated pattern that uses HashiCorp Vault already has deployed Vault as part of the `make install`. To verify that Vault is installed you can first see that the `vault` project exists and then select the Workloads/Pods:
 
-[![Vault Pods](/images/secrets/vault-pods.png)](/images/secrets/vault-pods.png)
+image::/images/secrets/vault-pods.png[Vault Pods,link="/images/secrets/vault-pods.png"]
 
 In order to setup HashiCorp Vault there are two different ways, both of which happen automatically as part of the `make install` command:
 
-1. Inside the cluster directly when the helm value `clusterGroup.insecureUnsealVaultInsideCluster` is set to `true`. With this method a cronjob will run every five minutes inside the `imperative` namespace and unseal, initialize and configure the vault. The vault's unseal keys and root token will be stored inside a secret called `vaultkeys` in the `imperative` namespace. **It is considered best practice** to copy the content of that secret offline, store it securely and then delete it.
-2. On the user's computer when the helm value `clusterGroup.insecureUnsealVaultInsideCluster` is set to `false`. This will store the json containing containing both vault root token and unseal keys inside a file called `common/pattern-vault.init`. It is recommended to encrypt this file or store it securely.
+. Inside the cluster directly when the helm value `clusterGroup.insecureUnsealVaultInsideCluster` is set to `true`. With this method a cronjob will run every five minutes inside the `imperative` namespace and unseal, initialize and configure the vault. The vault's unseal keys and root token will be stored inside a secret called `vaultkeys` in the `imperative` namespace. *It is considered best practice* to copy the content of that secret offline, store it securely and then delete it (one way to do this is sketched after this list).
+. On the user's computer when the helm value `clusterGroup.insecureUnsealVaultInsideCluster` is set to `false`. This will store the json containing both vault root token and unseal keys inside a file called `common/pattern-vault.init`. It is recommended to encrypt this file or store it securely.
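+
+A minimal sketch of how you might handle the generated keys in each case, using standard `oc` commands (the backup file name here is only an example, not something the pattern creates):
+
+[source,terminal]
+----
+# In-cluster method: copy the vaultkeys secret offline, then remove it from the cluster
+$ oc -n imperative get secret vaultkeys -o yaml > vaultkeys-backup.yaml
+$ oc -n imperative delete secret vaultkeys
+# Laptop method: inspect the generated init file before encrypting or moving it
+$ cat common/pattern-vault.init
+----
+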
 An example output is the following:
 
-```json
+[source,json]
+----
 {
   "recovery_keys_b64": [],
   "recovery_keys_hex": [],
@@ -50,24 +55,27 @@ An example output is the following:
   "unseal_shares": 5,
   "unseal_threshold": 3
 }
-```
+----
 
 The vault's root token is needed to log into the vault's UI and the unseal keys are needed whenever the vault pods are restarted. In the OpenShift console click on the nine box at the top and click on the `vault` line:
-[![Vault Nine Box](/images/secrets/vault-nine-box.png)]
+
+image::/images/secrets/vault-nine-box.png[Vault Nine Box]
 
 Copy the `root_token` field which in the example above has the value `hvs.VNFq7yPuZljq2VDJTkgAMs2Z` and paste it in the sign-in page:
 
-[![Vault Sign In](/images/secrets/vault-signin.png)](/images/secrets/vault-signin.png)
+link:/images/secrets/vault-signin.png[image:/images/secrets/vault-signin.png[Vault Sign In\]]
 
 After signing in you will see the secrets that have been created.
 
-[![Vault Secrets Engine](/images/secrets/vault-secrets-engine-screen.png)](/images/secrets/vault-secrets-engine-screen.png)
+link:/images/secrets/vault-secrets-engine-screen.png[image:/images/secrets/vault-secrets-engine-screen.png[Vault Secrets Engine\]]
 
-# Unseal
+[id="unseal"]
+== Unseal
 
 If you don't see the sign in page but instead see an unseal page, something may have happened the cluster and you need to unseal it again. Instead of using `make vault-init` you should run `make vault-unseal`. You can also unseal it manually by running `vault operator unseal` inside the `vault-0` pod in the `vault` namespace.
 
-# What's next?
+[id="whats-next"]
+== What's next?
 
 Check with the validated pattern instructions to see if there are further steps you need to perform. Sometimes this might be deploying a pattern on an edge cluster and checking to see if the correct Vault handshaking and updating occurs.
diff --git a/content/learn/workflow.md b/content/learn/workflow.adoc
similarity index 60%
rename from content/learn/workflow.md
rename to content/learn/workflow.adoc
index 688131e06..08169a3d8 100644
--- a/content/learn/workflow.md
+++ b/content/learn/workflow.adoc
@@ -5,86 +5,96 @@ weight: 40
 aliases: /workflow/
 ---
 
-# Workflow
+:toc:
 
-These patterns are designed to be composed of multiple components, and for those components to be used in gitops
-workflows by consumers and contributors. To use the first pattern as an example, we maintain the [Industrial Edge](/industrial-edge) pattern, which uses a [repo](https://github.com/hybrid-cloud-patterns/industrial-edge) with pattern-specific logic and configuration as well as a [common repo](https://github.com/hybrid-cloud-patterns/common) which has elements common to multiple patterns. The common repository is included in each pattern repository as a subtree.
-
-## Consuming a pattern
+= Workflow
 
-1. Fork the pattern repository on GitHub to your workspace (GitHub user or organization). It is necessary to fork because your fork will be updated as part of the GitOps and DevOps processes, and the main branch (by default) will be used in the automated workflows.
+These patterns are designed to be composed of multiple components, and for those components to be used in gitops
+workflows by consumers and contributors. 
To use the first pattern as an example, we maintain the link:/industrial-edge[Industrial Edge] pattern, which uses a https://github.com/hybrid-cloud-patterns/industrial-edge[repo] with pattern-specific logic and configuration as well as a https://github.com/hybrid-cloud-patterns/common[common repo] which has elements common to multiple patterns. The common repository is included in each pattern repository as a subtree. -1. Clone the forked copy +[id="consuming-a-pattern"] +== Consuming a pattern - `git clone git@github.com:/industrial-edge.git` +. Fork the pattern repository on GitHub to your workspace (GitHub user or organization). It is necessary to fork because your fork will be updated as part of the GitOps and DevOps processes, and the main branch (by default) will be used in the automated workflows. +. Clone the forked copy ++ +`git clone git@github.com:/industrial-edge.git` -1. Create a local copy of the Helm values file that can safely include credentials +. Create a local copy of the Helm values file that can safely include credentials - DO NOT COMMIT THIS FILE +DO NOT COMMIT THIS FILE You do not want to push personal credentials to GitHub. - ```sh - cp values-secret.yaml.template ~/values-secret.yaml - vi ~/values-secret.yaml - ``` - -1. Customize the deployment for your cluster +[source,terminal] +---- +$ cp values-secret.yaml.template ~/values-secret.yaml +$ vi ~/values-secret.yaml +---- - ```sh - vi values-global.yaml - git commit values-global.yaml - git push - ``` +. Customize the deployment for your cluster ++ +[source,terminal] +---- +$ vi values-global.yaml +$ git commit values-global.yaml +$ git push +---- -## Contributing +[id="contributing"] +== Contributing For contributions, we recommend adding the upstream repository as an additional remote, and making changes on a branch other than main. Changes on this branch can then be merged to the `main` branch (to be reflected in the GitOps workflows) and will be easier to make upstream, if you wish. Contributions from your forked `main` branch will contain, by design: -1. Customizations to `values-global.yaml` and other files that are particular to your installation -1. Commits made by Tekton and other automated processes that will be particular to your installation +. Customizations to `values-global.yaml` and other files that are particular to your installation +. 
Commits made by Tekton and other automated processes that will be particular to your installation
 
 To isolate changes for upstreaming (`hcp` is "Hybrid Cloud Patterns", you can use a different remote and/or branch name if you want):
 
-    ```sh
-    git remote add hcp https://github.com/hybrid-cloud-patterns/industrial-edge
-    git fetch --all
-    git branch -b hcp-main -t hcp/main
+[source,terminal]
+----
+$ git remote add hcp https://github.com/hybrid-cloud-patterns/industrial-edge
+$ git fetch --all
+$ git checkout -b hcp-main -t hcp/main
 
-    git push origin hcp-main
-    ```
+$ git push origin hcp-main
+----
 
 To update branch `hcp-main` with upstream changes:
 
-    ```sh
-    git checkout hcp-main
-    git pull --rebase
-    ```
+[source,terminal]
+----
+$ git checkout hcp-main
+$ git pull --rebase
+----
 
 To reflect these changes in your forked repository (such as if you would like to submit a PR later):
 
-    ```sh
-    git push origin hcp-main
-    ```
+[source,terminal]
+----
+$ git push origin hcp-main
+----
 
 If you want to integrate upstream pattern changes into your local GitOps process:
 
-    ```sh
-    git checkout main
-    git merge hcp-main
-    git push origin main
-    ```
+[source,terminal]
+----
+$ git checkout main
+$ git merge hcp-main
+$ git push origin main
+----
 
 Using this workflow, the `hcp-main` branch will:
 
-1. Be isolated from any changes that are being made by your local GitOps processes
-1. Be merge-able (or cherry-pick-able) into your local main branch to be used by your local GitOps processes
+. Be isolated from any changes that are being made by your local GitOps processes
+. Be merge-able (or cherry-pick-able) into your local main branch to be used by your local GitOps processes
 (this is especially useful for tracking when any submodules, like common, update)
-1. Be a good basis for submitting Pull Requests to be integrated upstream, since it will not contain your local configuration differences or your local GitOps commits
+. Be a good basis for submitting Pull Requests to be integrated upstream, since it will not contain your local configuration differences or your local GitOps commits
 
-## Changing subtrees
+[id="changing-subtrees"]
+== Changing subtrees
 
 Our patterns use the git subtree feature as a mechanism to promote modularity, so that multiple patterns can use the same common basis. Over time we will move more functionality into common, to isolate the components that are
@@ -94,71 +104,79 @@ You only need to change subtrees if you want to test changes in the common/ area
 For the common cases (use and consumption of the pattern), users do not need to be aware that the pattern uses a subtree at all.
 
-    ```sh
-    git clone https://github.com//industrial-edge
-    ```
+[source,terminal]
+----
+$ git clone https://github.com//industrial-edge
+----
 
 If you want to change and track your own version of common, you should fork and clone our common repository separately:
 
-    ```sh
-    git clone https://github.com//common
-    ```
+[source,terminal]
+----
+$ git clone https://github.com//common
+----
 
 Now, you can make changes in your fork's main branch, or else make a new branch and make changes there. If you want to track these changes in your fork of the _pattern_ repository (industrial-edge in this case), you will need to swap out the subtree in industrial-edge for the version of common you forked. 
We have provided a script to make this a bit easier: - ```sh - common/scripts/make_common_subtree.sh - ``` +[source,terminal] +---- +$ common/scripts/make_common_subtree.sh +---- This script will set up a new remote in your local working directory with the repository you specify. It will replace the common directory with a new common from the fork and branch you specify, and commit it. The script will _not_ push the result. For example: - ```sh - common/scripts/make_common_subtree.sh https://github.com/mhjacks/common.git wip-main common-subtree - ``` +[source,terminal] +---- +$ common/scripts/make_common_subtree.sh https://github.com/mhjacks/common.git wip-main common-subtree +---- This will replace common in the current repository with the wip-main branch from the common in mhjacks's common repository, and call the remote common-subtree. From that point, changes from mhjacks's wip-main branch on mhjacks's fork of common can be pulled in this way: - ```sh - git subtree pull --prefix common common-subtree wip-main - ``` +[source,terminal] +---- +$ git subtree pull --prefix common common-subtree wip-main +---- When run without arguments, the script will run as if it had been given the following arguments: - ```sh - common/scripts/make_common_subtree.sh https://github.com/hybrid-cloud-patterns/common.git main common-subtree - ``` +[source,terminal] +---- +$ common/scripts/make_common_subtree.sh https://github.com/hybrid-cloud-patterns/common.git main common-subtree +---- Which are the defaults the repository is normally configured with. -## Subtree vs. Submodule +[id="subtree-vs-submodule"] +== Subtree vs. Submodule It has always been important to us to be have a substrate for patterns that is as easy as possible to share amongst multiple patterns. While it is possible to share changes between multiple unrelated git repositories, it is an almost entirely manual process, prone to error. We feel it is important to be able to provide a "pull" experience (i.e. one git "pull" type action) to update the shared components of a pattern. Two strategies exist for repository sharing in this way: submodule and subtree. We started with submodules but have since moved to subtree. -Atlassian has some good documentation on what subtree is [here](https://blog.developer.atlassian.com/the-power-of-git-subtree/) and [here](https://www.atlassian.com/git/tutorials/git-subtree). In short, a subtree integrates another repository's history into a parent repository, which allows for most of the benefits of a submodule workflow, without most of the caveats. +Atlassian has some good documentation on what subtree is https://blog.developer.atlassian.com/the-power-of-git-subtree/[here] and https://www.atlassian.com/git/tutorials/git-subtree[here]. In short, a subtree integrates another repository's history into a parent repository, which allows for most of the benefits of a submodule workflow, without most of the caveats. Earlier versions of this document described the usage of patterns with submodules instead of subtrees. In the earliest stages of pattern development, we used submodules because the developers of the project were familiar with submodules and had used them previously, but we had not used subtrees. User feedback, as well as some of the unavoidable complexities of submodules, convinced us to try subtrees and we believe we will stick with that strategy. 
 Some of the unavoidable complexities of submodules include:
 
-- Having to remember to checkout repositories with `--recurse-submdules`, or else doing `git submodule init && git submodule sync`. Experienced developers asked in several of our support channels early on why common was empty.
-- Hoping that other tools that are interacting with the repository are compatible with the submodule approach. (To be fair, tools like ArgoCD and Tekton Pipelines did this very well; their support of submodules was one of the key reasons we started with submodules)
-- When changing branches on a submoduled repository, if the branch you were changing to was pointed to a different revision of the submoduled repository, the repository would show out of sync. While this behavior is correct, it can be surprising and difficult to navigate.
-- In disconnected environments, submodules require mirroring more repositories.
-- Developing with a fork of the submoduled repository means maintaining two forked repositories and multiple branches in both.
+* Having to remember to check out repositories with `--recurse-submodules`, or else running `git submodule init && git submodule sync`. Experienced developers asked in several of our support channels early on why common was empty.
+* Hoping that other tools that are interacting with the repository are compatible with the submodule approach. (To be fair, tools like ArgoCD and Tekton Pipelines did this very well; their support of submodules was one of the key reasons we started with submodules.)
+* When changing branches on a submoduled repository, if the branch you were changing to pointed to a different revision of the submoduled repository, the repository would show as out of sync. While this behavior is correct, it can be surprising and difficult to navigate.
+* In disconnected environments, submodules require mirroring more repositories.
+* Developing with a fork of the submoduled repository means maintaining two forked repositories and multiple branches in both.
 
 Subtrees have some pitfalls as well. In the subtree strategy, it is easier to diverge from the upstream version of the subtree repository, and in fact with a typical `git clone`, the user may not be aware that a subtree is in use at all. This can be considered a feature, but could become problematic if the user/consumer later wants to update to a newer version of the subtree but local changes might conflict. Additionally, since subtrees are not as well understood generally, there can be some surprising effects. In practice, we have run into the following:
 
-- Cherry picking from a subtree commit into the parent puts the change in the parent location, not the subtree
+* Cherry picking from a subtree commit into the parent puts the change in the parent location, not the subtree
 
-## Contributing to Patterns using Common Subtrees
+[id="contributing-to-patterns-using-common-subtrees"]
+== Contributing to Patterns using Common Subtrees
 
 Once you have forked common and changed your subtree for testing, changes from your fork can then be proposed to [https://github.com/hybrid-cloud-patterns/common.git] and can then be integrated into other patterns. A change to upstream common for a particular upstream pattern would have to be done in two stages:
 
-1. PR the change into upstream's common
-1. PR the updated common into the pattern repository
+. PR the change into upstream's common (see the sketch below for one way to prepare this)
+. PR the updated common into the pattern repository
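+
+One possible way to prepare the first of those PRs, assuming the `common-subtree` remote from the earlier example and a hypothetical branch name, is to let `git subtree push` extract your local `common/` changes onto a branch of your fork of common:
+
+[source,terminal]
+----
+$ git subtree push --prefix common common-subtree my-common-changes
+----
+
+You can then open the pull request from that branch of your common fork against the upstream repository, and follow up with the second PR in the pattern repository once the change has merged.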