diff --git a/content/learn/vp_add_ops_to_pattern.adoc b/content/learn/vp_add_ops_to_pattern.adoc index d1f513216..5ce9af4a8 100644 --- a/content/learn/vp_add_ops_to_pattern.adoc +++ b/content/learn/vp_add_ops_to_pattern.adoc @@ -3,7 +3,7 @@ menu: learn: parent: Validated patterns frameworks title: Adding Operators to the framework -weight: 23 +weight: 26 aliases: /ocp-framework/adding-operator-to-framework/ --- @@ -39,7 +39,7 @@ This procedure describes how subscriptions to Operators are added to the validat For example, if you want to deploy Advanced Cluster Management, AMQ (messaging) and AMQ Streams (Kafka) in your factory cluster, you would follow the guidance here: The assumption is there is a `values-factory.yaml` file that is used to deploy the factory cluster. The file should include the following entries: -+ + [source,yaml] ---- namespaces: diff --git a/content/learn/vp_add_specific_ops_to_pattern.adoc b/content/learn/vp_add_specific_ops_to_pattern.adoc index d3b915a85..8280d42f9 100644 --- a/content/learn/vp_add_specific_ops_to_pattern.adoc +++ b/content/learn/vp_add_specific_ops_to_pattern.adoc @@ -3,7 +3,7 @@ menu: learn: parent: Validated patterns frameworks title: Adding a specific operator to hub values file -weight: 24 +weight: 27 aliases: /ocp-framework/adding-specific-operator-to-hub/ --- diff --git a/content/patterns/multicloud-gitops/retail-managed-cluster.adoc b/content/patterns/multicloud-gitops/retail-managed-cluster.adoc new file mode 100644 index 000000000..90821ab29 --- /dev/null +++ b/content/patterns/multicloud-gitops/retail-managed-cluster.adoc @@ -0,0 +1,73 @@ +--- +title: Managed cluster sites +weight: 30 +aliases: /retail/retail-managed-cluster/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +//leaving this here on purpose to test H1 headings (with ID) in assemblies and its impact on the TOC +[id="attach-managed-cluster"] += Attach a managed cluster (edge) to the management hub
+The use of this Multicloud GitOps pattern depends on having at least one running Red Hat OpenShift cluster. + +When you install the Multicloud GitOps pattern, a hub cluster is set up. The hub cluster serves as the central point for managing and deploying applications across multiple clusters. + +include::modules/mcg-understanding-rhacm-requirements.adoc[leveloffset=+1] + +include::modules/mcg-deploying-managed-cluster-using-rhacm.adoc[leveloffset=+1] + +include::modules/comm-deploying-managed-cluster-using-cm-cli-tool.adoc[leveloffset=+1] + +include::modules/comm-deploying-managed-cluster-using-clusteradm-tool.adoc[leveloffset=+1] + +include::modules/comm-designate-cluster-as-managed-cluster-site.adoc[leveloffset=+2] + + +== Verification + +. Go to the OpenShift console of your managed cluster (edge) and verify that the `open-cluster-management-agent` pod has launched. + +[NOTE] +==== +It might take a while for the RHACM agent and `agent-addons` to launch. +==== + +. Check that the *Red Hat OpenShift GitOps Operator* is installed. + +. Launch the *Group-One OpenShift ArgoCD* console from the nines menu in the top right of the OpenShift console. Verify that the applications report the status `Healthy` and `Synced`. + +Verify that the *hello-world* application deployed successfully as follows: + +. Navigate to *Networking* -> *Routes* in the OpenShift console of your managed cluster (edge). + +. From the *Project:* drop-down, select the *hello-world* project. + +. Click the *Location URL*. This should reveal the following: ++ +[source,terminal] +---- +Hello World! + +Hub Cluster domain is 'apps.aws-hub-cluster.openshift.org' +Pod is running on Local Cluster Domain 'apps.aws-hub-cluster.openshift.org' +---- + +Verify that the *config-demo* application deployed successfully as follows: + +. Navigate to *Networking* -> *Routes* in the OpenShift console of your managed cluster (edge). + +. Select the *config-demo* *Project*. + +. Click the *Location URL*.
This should reveal the following: ++ +[source,terminal] +---- +Hub Cluster domain is 'apps.aws-hub-cluster.openshift.org' +Pod is running on Local Cluster Domain 'apps.aws-hub-cluster.openshift.org' +The secret is `secret` +---- diff --git a/content/patterns/retail/_index.adoc b/content/patterns/retail/_index.adoc new file mode 100644 index 000000000..7dcaa84f7 --- /dev/null +++ b/content/patterns/retail/_index.adoc @@ -0,0 +1,33 @@ +--- +title: Retail +date: 2022-12-08 +tier: tested +summary: This pattern models the store side of a retail application. +rh_products: +- Red Hat OpenShift Container Platform +- Red Hat Advanced Cluster Management +- Red Hat AMQ +industries: +- Retail +aliases: /retail/ +# uncomment once this exists +# pattern_logo: retail.png +links: + install: getting-started + help: https://groups.google.com/g/validatedpatterns + bugs: https://github.com/validatedpatterns/retail/issues +# uncomment once this exists +ci: retail +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +include::modules/retail-about.adoc[leveloffset=+1] + +include::modules/retail-architecture.adoc[leveloffset=+1] + + + diff --git a/content/patterns/retail/_index.md b/content/patterns/retail/_index.md deleted file mode 100644 index 20b882944..000000000 --- a/content/patterns/retail/_index.md +++ /dev/null @@ -1,76 +0,0 @@ ---- -title: Retail -date: 2022-12-08 -tier: tested -summary: This pattern demonstrates a pattern that models the store side of a retail application.
-rh_products: -- Red Hat OpenShift Container Platform -- Red Hat Advanced Cluster Management -- Red Hat AMQ -industries: -- Retail -aliases: /retail/ -# uncomment once this exists -# pattern_logo: retail.png -links: - install: getting-started - help: https://groups.google.com/g/validatedpatterns - bugs: https://github.com/validatedpatterns/retail/issues -# uncomment once this exists -ci: retail ---- - -# Retail Pattern - -## Background - -This pattern demonstrates a pattern that models the store side of a retail application. - -It is derived from the [Quarkus Coffeeshop Demo](https://quarkuscoffeeshop.github.io) done by Red -Hat Solution Architects. The demo models the use of multiple application microservices which use Kafka messaging to interact and a Postgres database to persist data. (There is a homeoffice analytics suite in the demo that we hope to include in a later version of the pattern. - -This demo pulls together several different strands of the demo and allows for multiple stores to be installed on remote clusters via ACM if the user desires. - -The demo allows users to go to the store's web page, order drinks and food items, and see those items "made" and served by the microservices in real time. - -The pattern includes build pipelines and a demo space, so that changes to the applications can be tested prior to "production" deployments. 
- -### Solution elements - -- How to use a GitOps approach to keep in control of configuration and operations -- How to centrally manage multiple clusters, including workloads -- How to build and deploy workloads across clusters using modern CI/CD -- How to architect a modern application using microservices and Kafka in Java - -### Red Hat Technologies - -- Red Hat OpenShift Container Platform (Kubernetes) -- Red Hat Advanced Cluster Management (Open Cluster Management) -- Red Hat OpenShift GitOps (ArgoCD) -- Red Hat OpenShift Pipelines (Tekton) -- Red Hat AMQ Streams (Apache Kafka Event Broker) - -## Architecture - -The following diagram shows the relationship between the microservices, messaging, and database components: - -[![Retail Pattern Architecture](/images/retail/retail-architecture.png)](/images/retail/retail-architecture.png) - -- The hub. This cluster hosts the CI/CD pipelines, a test instance of the applications and messaging/database services for testing purposes, and a single functional store. -- Optional remote clusters. Each remote site can support a complete store environment. The default one modelled is a "RALEIGH" store location. - -### Demo Scenario - -The Retail Validated Pattern / Demo Scenario is focused in the Quarkus Coffeeshop retail experience. In a full retail -environment, it would be easy to be overwhelmed by things like item files, tax tables, item movement/placement within the store and so on, so the demo does not attempt to model all those elements - instead offering a subset of services to give a sense of how data can flow in such a system, how microservices should interact (via API calls and message passing), and where data can be persisted. - -In the future we hope to expand this pattern with the homeoffice components, to further demonstrate how data can flow from leaf nodes to centralized data analytics services, which are crucial in retail IT environments. - -- Web Service - the point of sale within the store. 
Shows the menu, and allows the user to order food and drinks, and shows when orders are ready. -- Counter service - the "heart" of the store operation - receives orders and dispatches them to the barista and kitchen services, as appropriate. Users may order as many food and drink items in one order as they wish. -- Barista - the service responsible for providing items from the "drinks" side of the menu. -- Kitchen - the service responsible for providing items from the "food" side of the menu. - -Further documentation on the individual services is available at the upstream [Quarkus Coffeeshop](https://quarkuscoffeeshop.github.io/) documentation site. - -[![Demo Scenario](/images/retail/retail-highlevel.png)](/images/retail/retail-highlevel.png) diff --git a/content/patterns/retail/application.adoc b/content/patterns/retail/application.adoc new file mode 100644 index 000000000..11d1afbfa --- /dev/null +++ b/content/patterns/retail/application.adoc @@ -0,0 +1,12 @@ +--- +title: Demonstrating retail example applications +weight: 40 +aliases: /retail/application-demos/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +include::modules/retail-example-applications.adoc[] diff --git a/content/patterns/retail/application.md b/content/patterns/retail/application.md deleted file mode 100644 index 8da0951f0..000000000 --- a/content/patterns/retail/application.md +++ /dev/null @@ -1,47 +0,0 @@ ---- -title: Application Demos -weight: 30 -aliases: /retail/application/ ---- - -# Demonstrating Retail example applications - -## Background - -Up until now the Retail validated pattern has focused primarily on successfully deploying the architectural pattern. Now it is time to see the actual applications running as we have deployed them. - -If you have already deployed the hub cluster, then you have already seen several applications deployed in the OpenShift GitOps console.
If you haven't done this then we recommend you deploy the hub after you have setup the Quay repositories described below. - -## Ordering Items at the Coffeeshop - -The easiest way to get to the coffeeshop store page is from the OpenShift Console Menu Landing Page entry: - -[![retail-v1-console-menu](/images/retail/retail-v1-console-menu.png)](/images/retail/retail-v1-console-menu.png) - -Clicking on the Quarkus Coffeeshop Landing Page link will bring you to this page: - -[![retail-v1-landing-page](/images/retail/retail-v1-landing-page.png)](/images/retail/retail-v1-landing-page.png) - -And clicking on either the "Store Web Page" or "TEST Store Web Page" links will bring you to a screen that looks like this: - -[![retail-v1-store-page](/images/retail/retail-v1-store-page.png)](/images/retail/retail-v1-store-page.png) - -*NOTE*: The applications are initially identical. The "TEST" site is deployed to the `quarkuscoffeeshop-demo` namespace; the regular Store site is deployed to the `quarkuscoffeeshop-store` namespace. - -Each store requires supporting services, in PostgreSQL and Kafka. In our pattern, PostgreSQL is provided by the Crunchy PostgreSQL operator, and Kafka is provided by the Red Hat AMQ Streams operator. Each instance, the regular instance and the TEST instance, has its own instance of each of these supporting services it uses. - -To order, click on the "Place an Order" button on the front page. The menu should look like this: - -[![retail-v1-store-web-menu](/images/retail/retail-v1-store-web-menu.png)](/images/retail/retail-v1-store-web-menu.png) - -Click the "Add" button next to a menu item; the item name will appear. Add a name for the order: - -[![retail-v1-order-p1](/images/retail/retail-v1-order-p1.png)](/images/retail/retail-v1-order-p1.png) - -You can add as many orders as you want. 
On your last item, click the "Place Order" button on the item dialog: - -[![retail-v1-place-order](/images/retail/retail-v1-place-order.png)](/images/retail/retail-v1-place-order.png) - -As the orders are serviced by the barista and kitchen services, you can see their status in the "Orders" section of the page: - -[![retail-v1-orders-status](/images/retail/retail-v1-orders-status.png)](/images/retail/retail-v1-orders-status.png) diff --git a/content/patterns/retail/cluster-sizing.adoc b/content/patterns/retail/cluster-sizing.adoc new file mode 100644 index 000000000..106dcaca5 --- /dev/null +++ b/content/patterns/retail/cluster-sizing.adoc @@ -0,0 +1,14 @@ +--- +title: Cluster Sizing +weight: 60 +aliases: /retail/retail-cluster-sizing/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY + +include::modules/comm-attributes.adoc[] +include::modules/retail/metadata-retail.adoc[] + +include::modules/cluster-sizing-template.adoc[] diff --git a/content/patterns/retail/cluster-sizing.md b/content/patterns/retail/cluster-sizing.md deleted file mode 100644 index 9cedcba28..000000000 --- a/content/patterns/retail/cluster-sizing.md +++ /dev/null @@ -1,122 +0,0 @@ ---- -title: Cluster Sizing -weight: 50 -aliases: /retail/cluster-sizing/ ---- -# OpenShift Cluster Sizing for the Retail Pattern - -## Tested Platforms - -The **retail** pattern has been tested in the following Certified Cloud Providers. - - -| **Certified Cloud Providers** | 4.10 | -| :---- | :---- | -| Amazon Web Services| :heavy_check_mark: | -| Microsoft Azure| | -| Google Cloud Platform| | - - -## General OpenShift Minimum Requirements - -OpenShift 4 has the following minimum requirements for sizing of nodes: - -* **Minimum 4 vCPU** (additional are strongly recommended). -* **Minimum 16 GB RAM** (additional memory is strongly recommended, especially if etcd is colocated on masters). -* **Minimum 40 GB** hard disk space for the file system containing /var/. 
-* **Minimum 1 GB** hard disk space for the file system containing /usr/local/bin/. - -There are several applications that comprise the **retail** pattern. In addition, the **retail** pattern also includes a number of supporting operators that are installed by **OpenShift GitOps** using ArgoCD. - -### Retail Pattern OpenShift Datacenter HUB Cluster Size - -The retail pattern has been tested with a defined set of specifically tested configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture. - -The Datacenter HUB OpenShift Cluster is made up of the the following on the AWS deployment tested: - -| Node Type | Number of nodes | Cloud Provider | Instance Type -| :---- | :----: | :---- | :---- -| Master | 3 | Amazon Web Services | m5.xlarge -| Worker | 3 | Amazon Web Services | m5.xlarge - -The Datacenter HUB OpenShift cluster needs to be a bit bigger than the Factory/Edge clusters because this is where the developers will be running pipelines to build and deploy the **Industrial Edge** pattern on the cluster. The above cluster sizing is close to a **minimum** size for a Datacenter HUB cluster. In the next few sections we take some snapshots of the cluster utilization while the **Industrial Edge** pattern is running. Keep in mind that resources will have to be added as more developers are working building their applications. - -### Retail Pattern OpenShift Store Edge Cluster Size - -The OpenShift cluster is made of 3 Nodes combining Master/Workers for the Edge/Factory cluster. - -| Node Type | Number of nodes | Cloud Provider | Instance Type -| :----: | :----: | :----: | :----: -| Master/Worker | 3 | Google Cloud | n1-standard-8 -| Master/Worker | 3 | Amazon Cloud Services | m5.2xlarge -| Master/Worker | 3 | Microsoft Azure | Standard_D8s_v3 - -### AWS Instance Types - -The **retail** pattern was tested with the highlighted AWS instances in **bold**. 
The OpenShift installer will let you know if the instance type meets the minimum requirements for a cluster. - -The message that the openshift installer will give you will be similar to this message - -```text -INFO Credentials loaded from default AWS environment variables -FATAL failed to fetch Metadata: failed to load asset "Install Config": [controlPlane.platform.aws.type: Invalid value: "m4.large": instance type does not meet minimum resource requirements of 4 vCPUs, controlPlane.platform.aws.type: Invalid value: "m4.large": instance type does not meet minimum resource requirements of 16384 MiB Memory] -``` - -Below you can find a list of the AWS instance types that can be used to deploy the **retail** pattern. - -| Instance type | Default vCPUs | Memory (GiB) | Datacenter | Factory/Edge -| :------: | :-----: | :-----: | :----: | :----: -| | | | 3x3 OCP Cluster | 3 Node OCP Cluster -| m4.xlarge | 4 | 16 | N | N -| m4.2xlarge | 8 | 32 | Y | Y -| m4.4xlarge | 16 | 64 | Y | Y -| m4.10xlarge | 40 | 160 | Y | Y -| m4.16xlarge | 64 | 256 | Y | Y -| **m5.xlarge** | 4 | 16 | Y | N -| m5.2xlarge | 8 | 32 | Y | Y -| m5.4xlarge | 16 | 64 | Y | Y -| m5.8xlarge | 32 | 128 | Y | Y -| m5.12xlarge | 48 | 192 | Y | Y -| m5.16xlarge | 64 | 256 | Y | Y -| m5.24xlarge | 96 | 384 | Y | Y - -The OpenShift cluster is made of 3 Masters and 3 Workers for the Datacenter and the Edge/Factory cluster are made of 3 Master/Worker nodes. For the node sizes we used the **m5.xlarge** on AWS and this instance type met the minimum requirements to deploy the **retail** pattern successfully on the Datacenter hub. On the Factory/Edge cluster we used the **m5.2xlarge** since the minimum cluster was comprised of 3 nodes. . - -To understand better what types of nodes you can use on other Cloud Providers we provide some of the details below. - -### Azure Instance Types - -The **retail** pattern was also deployed on Azure using the **Standard_D8s_v3** VM size. 
Below is a table of different VM sizes available for Azure. Keep in mind that due to limited access to Azure we only used the **Standard_D8s_v3** VM size. - -The OpenShift cluster is made of 3 Master and 3 Workers for the Datacenter cluster. - -The OpenShift cluster is made of 3 Nodes combining Master/Workers for the Edge/Factory cluster. - -| Type | Sizes | Description -| :---- | :---- | :---- -| [General purpose](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-general) |B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dv4, Dsv4, Ddv4, Ddsv4 | Balanced CPU-to-memory ratio. Ideal for testing and development, small to medium databases, and low to medium traffic web servers. -| [Compute optimized](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-compute) | F, Fs, Fsv2, FX | High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers. -| [Memory optimized](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-memory) | Esv3, Ev3, Easv4, Eav4, Ev4, Esv4, Edv4, Edsv4, Mv2, M, DSv2, Dv2 | High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. -| [Storage optimized](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-storage) | Lsv2 | High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases. -| [GPU](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu) | NC, NCv2, NCv3, NCasT4_v3, ND, NDv2, NV, NVv3, NVv4 | Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs. -| [High performance compute](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-hpc) | HB, HBv2, HBv3, HC, H | Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA). 
- -For more information please refer to the [Azure VM Size Page](https://docs.microsoft.com/en-us/azure/virtual-machines/sizes). - -### Google Cloud (GCP) Instance Types - -The **retail** pattern was also deployed on GCP using the **n1-standard-8** VM size. Below is a table of different VM sizes available for GCP. Keep in mind that due to limited access to GCP we only used the **n1-standard-8** VM size. - -The OpenShift cluster is made of 3 Master and 3 Workers for the Datacenter cluster. - -The OpenShift cluster is made of 3 Nodes combining Master/Workers for the Edge/Factory cluster. - -The following table provides VM recommendations for different workloads. - -| **General purpose** | **Workload optimized** -| Cost-optimized | Balanced | Scale-out optimized | Memory-optimized |Compute-optimized | Accelerator-optimized -| :---- | :---- | :---- | :---- | :---- | :---- -| E2 | N2, N2D, N1 | T2D | M2, M1 | C2 | A2 -Day-to-day computing at a lower cost | Balanced price/performance across a wide range of VM shapes | Best performance/cost for scale-out workloads | Ultra high-memory workloads | Ultra high performance for compute-intensive workloads | Optimized for high performance computing workloads - -For more information please refer to the [GCP VM Size Page](https://cloud.google.com/compute/docs/machine-types). diff --git a/content/patterns/retail/components.adoc b/content/patterns/retail/components.adoc new file mode 100644 index 000000000..e27b31195 --- /dev/null +++ b/content/patterns/retail/components.adoc @@ -0,0 +1,67 @@ +--- +title: Details of the components +weight: 50 +aliases: /retail/component-details/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + + +=== The Quarkus Coffeeshop Store https://github.com/validatedpatterns/retail/tree/main/charts/store/quarkuscoffeeshop-charts[Chart] + +This chart is responsible for deploying the applications, services and routes for the Quarkus Coffeeshop demo. 
It models a set of microservices +that would make sense for a coffeeshop retail operation. The details of what the microservices do are described https://quarkuscoffeeshop.github.io/coffeeshop/[here]. + +* https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-web[quarkuscoffeeshop-web] - Serves as the `front end` for ordering food and drinks. + +* https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-counter[quarkuscoffeeshop-counter] - The counter service receives the orders, persists them in the database, and notifies when they are ready. + +* https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-barista[quarkuscoffeeshop-barista] - The barista service is responsible for preparing items from the `drink` side of the menu. + +* https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-kitchen[quarkuscoffeeshop-kitchen] - The kitchen service is responsible for preparing items from the `food` side of the menu. + +* https://github.com/quarkuscoffeeshop/customerloyalty[quarkuscoffeeshop-customerloyalty] - The customerloyalty service is responsible for generating customer loyalty events when a customer enters the `rewards` email. This data is not persisted or tracked anywhere. + +* https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-inventory[quarkuscoffeeshop-inventory] - The inventory service is responsible for tracking food and drink inventory. + +* https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-customermocker[quarkuscoffeeshop-customermocker] - The customermocker can be used to generate test traffic. + +* https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-majestic-monolith[quarkuscoffeeshop-majestic-monolith] - The `majestic monolith` builds all the apps into a single bundle, to simplify the process of deploying this app on single-node systems.
+ +All the components look like this in ArgoCD when deployed: + +image:/images/retail/retail-v1-argo-coffeeshop-store.png[retail-v1-argo-coffeeshop-store] + +The chart is designed such that the same chart can be deployed in the hub cluster as the `production` store, the `demo` or `TEST` store, and on a remote cluster. + +=== The Quarkus Coffeeshop Database https://github.com/validatedpatterns/retail/tree/main/charts/all/crunchy-pgcluster[Chart] + +This installs a database instance suitable for use in the Retail pattern. It uses the Crunchy PostgreSQL https://github.com/CrunchyData/postgres-operator[Operator] to provide PostgreSQL services, which includes high availability and backup services by default, with other features available. + +Similar to the store chart, the Database chart can be deployed in the same variety of scenarios. + +In ArgoCD, it looks like this: + +image:/images/retail/retail-v1-argo-coffeeshopdb.png[retail-v1-argo-coffeeshopdb] + +=== The Quarkus Coffeeshop Kafka https://github.com/validatedpatterns/retail/tree/main/charts/all/quarkuscoffeeshop-kafka[Chart] + +This chart installs Kafka for use in the Retail pattern. It uses the Red Hat AMQ Streams +https://access.redhat.com/documentation/en-us/red_hat_amq/7.2/html/using_amq_streams_on_openshift_container_platform/index[operator]. + +=== The Quarkus Coffeeshop Pipelines https://github.com/validatedpatterns/retail/tree/main/charts/hub/quarkuscoffeeshop-pipelines[Chart] + +The pipelines chart defines build pipelines using the Red Hat OpenShift Pipelines https://catalog.redhat.com/software/operators/detail/5ec54a4628834587a6b85ca5[Operator] (tektoncd). Pipelines are provided for all of the application images that ship with the pattern; the pipelines build each app from source, deploy it to the `demo` namespace, and push it to the configured image registry. + +Like the store and database charts, the pipelines chart supports all three modes of deployment.
+ +image:/images/retail/retail-v1-argo-pipelines.png[retail-v1-argo-pipelines] + +=== The Quarkus Coffeeshop Landing Page https://github.com/validatedpatterns/retail/tree/main/charts/all/landing-page[Chart] + +The Landing Page chart builds the page that presents the links for the demos in the pattern. + +image:/images/retail/retail-v1-argo-landing-page.png[retail-v1-landing-page] diff --git a/content/patterns/retail/components.md b/content/patterns/retail/components.md deleted file mode 100644 index a3448224f..000000000 --- a/content/patterns/retail/components.md +++ /dev/null @@ -1,77 +0,0 @@ ---- -title: Components -weight: 30 -aliases: /retail/components/ ---- - -# Component Details - -## The Quarkus Coffeeshop Store [Chart](https://github.com/validatedpatterns/retail/tree/main/charts/store/quarkuscoffeeshop-charts) - -This chart is responsible for deploying the applications, services and routes for the Quarkus Coffeeshop demo. It models a set of microservices that would make sense for a coffeeshop retail operation. The detail of what the microservices do is [here](https://quarkuscoffeeshop.github.io/coffeeshop/). - -* [quarkuscoffeeshop-web](https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-web) - -Serves as the "front end" for ordering food and drinks. - -* [quarkuscoffeeshop-counter](https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-counter) - -The counter service receives the orders, persists them in the database, and notifies when they are ready. - -* [quarkuscoffeeshop-barista](https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-barista) - -The barista service is responsible for preparing items from the "drink" side of the menu. - -* [quarkuscoffeeshop-kitchen](https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-kitchen) - -The kitchen service is responsible for preparing items from the "food" side of the menu. 
- -* [quarkuscoffeeshop-customerloyalty](https://github.com/quarkuscoffeeshop/customerloyalty) - -The customerloyalty service is responsible for generating customer loyalty events, when a customer enters the "rewards" email. This data is not persisted or tracked anywhere. - -* [quarkuscoffeeshop-inventory](https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-inventory) - -The inventory service is responsible for tracking food and drink inventory. - -* [quarkuscoffeeshop-customermocker](https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-customermocker) - -The customermocker can be used to generate test traffic. - -* [quarkuscoffeeshop-majestic-monolith](https://github.com/quarkuscoffeeshop/quarkuscoffeeshop-majestic-monolith) - -The "majestic monolith" builds all the apps into a single bundle, to simplify the process of deploying this app on single node systems. - -All the components look like this in ArgoCD when deployed: - -[![retail-v1-argo-coffeeshop-store](/images/retail/retail-v1-argo-coffeeshop-store.png)](/images/retail/retail-v1-argo-coffeeshop-store.png) - -The chart is designed such that the same chart can be deployed in the hub cluster as the "production" store, the "demo" or TEST store, and on a remote cluster. - -## The Quarkus Coffeeshop Database [Chart](https://github.com/validatedpatterns/retail/tree/main/charts/all/crunchy-pgcluster) - -This installs a database instance suitable for use in the Retail pattern. It uses the Crunchy PostgreSQL [Operator](https://github.com/CrunchyData/postgres-operator) to provide PostgreSQL services, which includes high availability and backup services by default, and other features available. - -Like the store chart, the Database chart can be deployed in the same different scenarios. 
- -In ArgoCD, it looks like this: - -[![retail-v1-argo-coffeeshopdb](/images/retail/retail-v1-argo-coffeeshopdb.png)](/images/retail/retail-v1-argo-coffeeshopdb.png) - -## The Quarkus Coffeeshop Kafka [Chart](https://github.com/validatedpatterns/retail/tree/main/charts/all/quarkuscoffeeshop-kafka) - -This chart installs Kafka for use in the Retail pattern. It uses the Red Hat AMQ Streams [operator](https://access.redhat.com/documentation/en-us/red_hat_amq/7.2/html/using_amq_streams_on_openshift_container_platform/index). - -## The Quarkus Coffeeshop Pipelines [Chart](https://github.com/validatedpatterns/retail/tree/main/charts/hub/quarkuscoffeeshop-pipelines) - -The pipelines chart defines build pipelines using the Red Hat OpenShift Pipelines [Operator](https://catalog.redhat.com/software/operators/detail/5ec54a4628834587a6b85ca5) (tektoncd). Pipelines are provided for all of the application images that ship with the pattern; the pipelines all build the app from source, deploy them to the "demo" namespace, and push them to the configured image registry. - -Like the store and database charts, the kafka chart supports all three modes of deployment. - -[![retail-v1-argo-pipelines](/images/retail/retail-v1-argo-pipelines.png)](/images/retail/retail-v1-argo-pipelines.png) - -## The Quarkus Coffeeshop Landing Page [Chart](https://github.com/validatedpatterns/retail/tree/main/charts/all/landing-page) - -The Landing Page chart builds the page that presents the links for the demos in the pattern. 
- -[![retail-v1-landing-page](/images/retail/retail-v1-argo-landing-page.png)](/images/retail/retail-v1-argo-landing-page.png) diff --git a/content/patterns/retail/getting-started.adoc b/content/patterns/retail/getting-started.adoc new file mode 100644 index 000000000..35dfb6049 --- /dev/null +++ b/content/patterns/retail/getting-started.adoc @@ -0,0 +1,14 @@ +--- +title: Getting Started +weight: 10 +aliases: /retail/getting-started/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +include::modules/retail-deploying.adoc[leveloffset=1] + + diff --git a/content/patterns/retail/getting-started.md b/content/patterns/retail/getting-started.md deleted file mode 100644 index 977ed63a4..000000000 --- a/content/patterns/retail/getting-started.md +++ /dev/null @@ -1,168 +0,0 @@ ---- -title: Getting Started -weight: 10 -aliases: /retail/getting-started/ ---- - -# Deploying the Retail Pattern - -## Prerequisites - -1. An OpenShift cluster (Go to [the OpenShift console](https://console.redhat.com/openshift/create)). Cluster must have a dynamic StorageClass to provision PersistentVolumes. See also [sizing your cluster](../../retail/cluster-sizing). -1. (Optional) A second OpenShift cluster for a second store environment, "raleigh". -1. A GitHub account -1. (Optional) A quay account that can update images; this is if you want to use the pipelines to customize the applications -1. (Optional) A quay account with the following repositories set as public, and which you can write to: - - - quarkuscoffeeshop-barista - - quarkuscoffeeshop-counter - - quarkuscoffeeshop-customerloyalty - - quarkuscoffeeshop-customermocker - - quarkuscoffeeshop-inventory - - quarkuscoffeeshop-kitchen - - quarkuscoffeeshop-majestic-monolith - - quarkuscoffeeshop-web - - These repos comprise the microservices that are in the demo. 
The public repos (quay.io/hybrid-cloud-patterns/*) contain pre-built images which will be downloaded and used by default; so the demo will run regardless of whether you choose to rebuild the apps or not. This mechanism is provided for transparency purposes (so you can reproduce the same results); or if you want to customize or change the apps themselves in some way.
-
-The use of this pattern depends on having at least one running Red Hat
-OpenShift cluster. All of the apps will run on a single cluster; optionally you can use RHACM to apply the store apps to a second cluster.
-
-If you do not have a running Red Hat OpenShift cluster you can start one on a
-public or private cloud by using [Red Hat's cloud
-service](https://console.redhat.com/openshift/create).
-
-## Prerequisite Tools
-
-Install the installation tooling dependencies. You will need:
-
-{% include prerequisite-tools.md %}
-
-## How to deploy
-
-1. Fork the [retail](https://github.com/validatedpatterns/retail) repository on GitHub.
-
-1. Clone the forked copy of the `retail` repo. Use branch `v1.0`.
-
-   ```sh
-   git clone git@github.com:{your-username}/retail.git
-   cd retail
-   git checkout v1.0
-   ```
-
-1. You can create your own branch where your specific values will be pushed to:
-
-   ```sh
-   git checkout -b my-branch
-   ```
-
-1. A `values-secret.yaml` file is used to automate setup of secrets needed for:
-
-   - A container image registry (for example, Quay)
-
-   DO NOT COMMIT THIS FILE. You do not want to push personal credentials to GitHub.
-
-   ```sh
-   cp values-secret.yaml.template ~/values-secret.yaml
-   vi ~/values-secret.yaml
-   ```
-
-1. Customize the deployment for your cluster. Change the appropriate values in `values-global.yaml`:
-
-   ```sh
-   vi values-global.yaml
-   git add values-global.yaml
-   git commit values-global.yaml
-   git push origin my-branch
-   ```
-
-In particular, the values that you need to change are under the `imageregistry` key, to use your own account and hostname.
If you like, you can change the git settings (`account`, `email`, `hostname`) to reflect your own account settings.
-
-If you plan to customize the build of the applications themselves, there are `revision` and `imageTag` settings for each of them. The defaults should suffice if you just want to see the apps running.
-
-1. You can deploy the pattern using the [validated pattern operator](/infrastructure/using-validated-pattern-operator/). If you do use the operator then skip to Validating the Environment below.
-
-1. Preview the changes:
-
-   ```sh
-   ./pattern.sh make show
-   ```
-
-1. Log in to your cluster using `oc login` or by exporting the KUBECONFIG:
-
-   ```sh
-   oc login
-   ```
-
-   or
-
-   ```sh
-   export KUBECONFIG=~/my-ocp-env/retail-hub
-   ```
-
-1. Apply the changes to your cluster:
-
-   ```sh
-   ./pattern-util.sh make install
-   ```
-
-This will execute `make install` in the team's container, which will take a bit to load the first time. It contains Ansible and other dependencies so that you do not need to install them on your workstation.
-
-The default `install` target will:
-
-1. Install the pattern via the operator
-1. Load the imageregistry secret into the vault
-1. Start the application build pipelines
-
-If you chose not to put in your registry credential, `make install` cannot complete successfully because it waits for the secret to be populated before starting the pipelines.
-
-If you do not want to run the (optional) components, another install target is provided:
-
-```text
-./common/scripts/pattern-util.sh make install-no-pipelines
-```
-
-This skips the vault setup and the pipeline builds, but still installs both Vault and the Pipelines operator, so if you want to run those in your installation later, you can run `make install` to enable them.
-
-For more information on secrets management see [here](/secrets).
For information on HashiCorp's Vault see [here](/secrets/vault).
-
-## Validating the Environment
-
-Check that the operators have been installed:
-
-   ```text
-   UI -> Installed Operators
-   ```
-
-[![retail-v1-operators](/images/retail/retail-v1-operators.png)](/images/retail/retail-v1-operators.png)
-
-The OpenShift console menu should look like this. We will use it to validate that the pattern is working as expected:
-
-[![retail-v1-console-menu](/images/retail/retail-v1-console-menu.png)](/images/retail/retail-v1-console-menu.png)
-
-Check on the pipelines, if you chose to run them. They should all complete successfully:
-
-[![retail-v1-pipelines](/images/retail/retail-v1-pipelines.png)](/images/retail/retail-v1-pipelines.png)
-
-Ensure that the Hub ArgoCD instance shows all of its apps in Healthy and Synced status once all of the images have been built:
-
-[![retail-v1-argo-apps-p1](/images/retail/retail-v1-argo-apps-p1.png)](/images/retail/retail-v1-argo-apps-p1.png)
-
-We will go to the Landing Page, which will present the applications in the pattern:
-
-[![retail-v1-landing-page](/images/retail/retail-v1-landing-page.png)](/images/retail/retail-v1-landing-page.png)
-
-Clicking on the Store Web Page will place us in the Quarkus Coffeeshop Demo:
-
-[![retail-v1-store-page](/images/retail/retail-v1-store-page.png)](/images/retail/retail-v1-store-page.png)
-
-Clicking on the TEST Store Web Page will place us in a separate copy of the same demo.
-
-Clicking on the respective Kafdrop links will take you to a Kafdrop instance that allows inspection of each of the respective environments.
- -[![retail-v1-kafdrop](/images/retail/retail-v1-kafdrop.png)](/images/retail/retail-v1-kafdrop.png) - -## Next Steps - -[Help & Feedback](https://groups.google.com/g/validatedpatterns){: .btn .fs-5 .mb-4 .mb-md-0 .mr-2 } -[Report Bugs](https://github.com/validatedpatterns/retail/issues){: .btn .btn-red .fs-5 .mb-4 .mb-md-0 .mr-2 } diff --git a/content/patterns/retail/ideas-for-customization.md b/content/patterns/retail/ideas-for-customization.md deleted file mode 100644 index 5d450b128..000000000 --- a/content/patterns/retail/ideas-for-customization.md +++ /dev/null @@ -1,8 +0,0 @@ ---- -title: Ideas for Customization -weight: 60 -aliases: /retail/ideas-for-customization/ ---- - -# Ideas for Customization - diff --git a/content/patterns/retail/store.adoc b/content/patterns/retail/store.adoc new file mode 100644 index 000000000..d23e3ca9c --- /dev/null +++ b/content/patterns/retail/store.adoc @@ -0,0 +1,76 @@ +--- +title: Managed store sites +weight: 30 +aliases: /retail/retail-managed-cluster/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] + +== Having a store (edge) cluster join the datacenter (hub) + +=== Allow ACM to deploy the store application to a subset of clusters + +A store ("`ATLANTA`") is installed on the hub cluster by default. This feature is interesting if you want to see how ACM can manage a remote +cluster to install the same application on a different cluster. + +The way we apply this is through the `managedClusterGroups` block in `values-hub.yaml`: + +[source,yaml] +---- + managedClusterGroups: + raleigh: + name: store-raleigh + helmOverrides: + # Values must be strings! + - name: clusterGroup.isHubCluster + value: "false" + clusterSelector: + matchLabels: + clusterGroup: store-raleigh + matchExpressions: + - key: vendor + operator: In + values: + - OpenShift +---- + +Any cluster joined with the label `clusterGroup=store-raleigh` is assigned the policies that deploy the store app to them. 
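For reference, RHACM evaluates this `clusterSelector` against the labels on each managed cluster's `ManagedCluster` resource on the hub. The following is a minimal sketch of a matching resource, assuming the standard Open Cluster Management API; the cluster name is a hypothetical example:

[source,yaml]
----
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: store-raleigh-1           # hypothetical cluster name
  labels:
    clusterGroup: store-raleigh   # matches the clusterSelector above
    vendor: OpenShift             # set automatically when an OpenShift cluster is imported
spec:
  hubAcceptsClient: true
----

You can also apply the label to an already-imported cluster from the hub, for example with `oc label managedcluster <cluster-name> clusterGroup=store-raleigh`.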
+
+[id="attach-managed-cluster"]
+=== Attaching a managed cluster (edge) to the management hub
+
+The use of this pattern depends on having at least one running Red Hat OpenShift cluster.
+
+When you install the retail GitOps pattern, a hub cluster is set up. The hub cluster serves as the central point for managing and deploying applications across multiple clusters.
+
+include::modules/retail-understanding-rhacm-requirements.adoc[leveloffset=+1]
+
+include::modules/mcg-deploying-managed-cluster-using-rhacm.adoc[leveloffset=+1]
+
+include::modules/comm-deploying-managed-cluster-using-cm-cli-tool.adoc[leveloffset=+1]
+
+include::modules/comm-deploying-managed-cluster-using-clusteradm-tool.adoc[leveloffset=+1]
+
+include::modules/comm-designate-cluster-as-managed-cluster-site.adoc[leveloffset=+2]
+
+
+== Verification
+
+. Go to your managed cluster (edge) OpenShift console and check for the `open-cluster-management-agent` pod being launched.
+
+[NOTE]
+====
+It might take a while for the RHACM agent and `agent-addons` to launch.
+====
+
+=== Store is joined
+
+That is it! After the RHACM agent and `agent-addons` are running, the OpenShift GitOps Operator is installed on the store (edge) cluster. When it has finished coming up, launch the OpenShift GitOps (Argo CD) console from the application launcher at the top right of the OpenShift console.
diff --git a/content/patterns/retail/store.md b/content/patterns/retail/store.md
deleted file mode 100644
index 879ae34f4..000000000
--- a/content/patterns/retail/store.md
+++ /dev/null
@@ -1,104 +0,0 @@
----
-title: Store Sites
-weight: 20
-aliases: /retail/store/
----
-
-# Having a store (edge) cluster join the datacenter (hub)
-
-## Allow ACM to deploy the store application to a subset of clusters
-
-A store ("ATLANTA") is installed on the hub cluster by default.
This feature is interesting if you want to see how ACM can manage a remote cluster to install the same application on a different cluster.
-
-The way we apply this is through the managedClusterGroups block in `values-hub.yaml`:
-
-```yaml
- managedClusterGroups:
- - name: store
-   clusterSelector:
-     matchLabels:
-       clusterGroup: raleigh
-     matchExpressions:
-     - key: vendor
-       operator: In
-       values:
-       - OpenShift
-```
-
-Any cluster joined with the label `clusterGroup=raleigh` will be assigned the policies that deploy the store app to it.
-
-## Deploy a store cluster
-
-Rather than provide instructions on creating a store cluster, it is assumed
-that an OpenShift cluster has already been created. Use the `openshift-install` program provided at [cloud.redhat.com](https://console.redhat.com/openshift/create "Create an OpenShift cluster").
-
-There are three ways to join the store to the datacenter:
-
-* Using the ACM user interface
-* Using the `cm` tool
-* Using the `clusteradm` tool
-
-## Store setup using the ACM UI
-
-After ACM is installed, a message regarding a "Web console update is available" may be displayed.
-Click on the "Refresh web console" link.
-
-On the upper-left side you'll see a pull down labeled "local-cluster". Select "All Clusters" from this pull down.
-This will navigate to the ACM console and to its "Clusters" section.
-
-Select the "Import cluster" option beside the Create Cluster button.
-
-![import-cluster](/images/import-cluster.png "Select Import cluster")
-
-On the "Import an existing cluster" page, enter the cluster name and choose Kubeconfig as the "import mode". Add the tag `site=store`, then press Import.
-
-![import-with-kubeconfig](/images/import-with-kubeconfig.png "Import using kubeconfig")
-
-Using this method, you are done. Skip to the section [Store is joined](#store-is-joined) but ignore the part about adding the site tag.
-
-## Store setup using `cm` tool
-
-1. Install the `cm` (cluster management) command-line tool.
See details [here](https://github.com/open-cluster-management/cm-cli/#installation).
-
-1. Obtain the KUBECONFIG file from the edge/store cluster.
-
-1. On the command line, log in to the hub/datacenter cluster (use `oc login` or export the KUBECONFIG).
-
-1. Run the following command:
-
-```sh
-cm attach cluster --cluster --cluster-kubeconfig
-```
-
-Skip to the section [Store is joined](#store-is-joined).
-
-## Store setup using `clusteradm` tool
-
-You can also use `clusteradm` to join a cluster. The following instructions explain what needs to be done. `clusteradm` is still in testing.
-
-1. To deploy an edge cluster you will need to get the datacenter (or hub) cluster's token. You will need to install `clusteradm`. On the existing *datacenter cluster*:
-
-   `clusteradm get token`
-
-1. When you run the `clusteradm` command above, it replies with the token and also shows you the command to use on the store. So first you must log in to the store cluster:
-
-   `oc login`
-
-   or
-
-   `export KUBECONFIG=~/my-ocp-env/store`
-
-1. Then request that the store join the datacenter hub:
-
-   `clusteradm join --hub-token `
-
-1. Back on the hub cluster, accept the join request:
-
-   `clusteradm accept --clusters `
-
-Skip to the next section, [Store is joined](#store-is-joined).
-
-## Store is joined
-
-### You're done
-
-That's it! Go to your store (edge) OpenShift console and check for the open-cluster-management-agent pod being launched. Be patient, it will take a while for the ACM agent and agent-addons to launch. After that, the operator OpenShift GitOps will run. When it's finished coming up, launch the OpenShift GitOps (ArgoCD) console from the top right of the OpenShift console.
diff --git a/content/patterns/retail/troubleshooting.md b/content/patterns/retail/troubleshooting.md deleted file mode 100644 index afc1800fa..000000000 --- a/content/patterns/retail/troubleshooting.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: Troubleshooting -weight: 40 -aliases: /retail/troubleshooting/ ---- - -# Troubleshooting - -## Our [Issue Tracker](https://github.com/validatedpatterns/industrial-edge/issues) diff --git a/content/patterns/travelops/cluster-sizing.adoc b/content/patterns/travelops/cluster-sizing.adoc new file mode 100644 index 000000000..228882ca0 --- /dev/null +++ b/content/patterns/travelops/cluster-sizing.adoc @@ -0,0 +1,14 @@ +--- +title: Cluster Sizing +weight: 60 +aliases: /retail/travelops-sizing/ +--- + +:toc: +:imagesdir: /images +:_content-type: ASSEMBLY + +include::modules/comm-attributes.adoc[] +include::modules/travelops/metadata-travelops.adoc[] + +include::modules/cluster-sizing-template.adoc[] diff --git a/modules/comm-deploying-managed-cluster-using-cm-cli-tool.adoc b/modules/comm-deploying-managed-cluster-using-cm-cli-tool.adoc index c11a29fc8..3b60136ac 100644 --- a/modules/comm-deploying-managed-cluster-using-cm-cli-tool.adoc +++ b/modules/comm-deploying-managed-cluster-using-cm-cli-tool.adoc @@ -20,20 +20,20 @@ + [source,terminal] ---- -oc login +$ oc login --token= --server=https://api..:6443 ---- or + [source,terminal] ---- -export KUBECONFIG=~/ +$ export KUBECONFIG=~/ ---- . 
Run the following command:
+
[source,terminal]
----
-cm attach cluster --cluster --cluster-kubeconfig
+$ cm attach cluster --cluster --cluster-kubeconfig
----

[role="_next-steps"]
diff --git a/modules/retail-about.adoc b/modules/retail-about.adoc
new file mode 100644
index 000000000..abdace095
--- /dev/null
+++ b/modules/retail-about.adoc
@@ -0,0 +1,31 @@
+:_content-type: CONCEPT
+:imagesdir: ../../images
+
+[id="about-retail-pattern"]
+= About the retail pattern
+
+This pattern models the store side of a retail application.
+
+It is derived from the https://quarkuscoffeeshop.github.io[Quarkus Coffeeshop Demo] created by Red Hat Solution Architects. The demo showcases the use of multiple microservices that interact through Kafka messaging and persist data in a PostgreSQL database.
+
+This pattern pulls together several strands of the demo and allows for multiple stores to be installed on remote clusters via ACM if the user desires.
+
+The demo allows users to go to the store’s web page, order drinks and food items, and see those items "`made`" and served by the microservices in real time. The pattern includes build pipelines and a demo space, so that changes to the applications can be tested prior to "`production`" deployments.
+ +[id="solution-elements"] +== Solution elements + +* How to use a GitOps approach to keep in control of configuration and operations +* How to centrally manage multiple clusters, including workloads +* How to build and deploy workloads across clusters using modern CI/CD +* How to architect a modern application using microservices and Kafka in Java + + +[id="rhel-technologies"] +== Red Hat Technologies + +* Red Hat OpenShift Container Platform (Kubernetes) +* Red Hat Advanced Cluster Management (Open Cluster Management) +* Red Hat OpenShift GitOps (ArgoCD) +* Red Hat OpenShift Pipelines (Tekton) +* Red Hat AMQ Streams (Apache Kafka Event Broker) diff --git a/modules/retail-architecture.adoc b/modules/retail-architecture.adoc new file mode 100644 index 000000000..a44fbf99d --- /dev/null +++ b/modules/retail-architecture.adoc @@ -0,0 +1,29 @@ +:_content-type: CONCEPT +:imagesdir: ../../images + +[id="overview-architecture"] += Architecture + +The following diagram shows the relationship between the microservices, +messaging, and database components: + +link:/images/retail/retail-architecture.png[image:/images/retail/retail-architecture.png[Retail Pattern Architecture]] + +* The hub. This cluster hosts the CI/CD pipelines, a test instance of the applications and messaging/database services for testing purposes, and a single functional store. +* Optional remote clusters. Each remote site can support a complete store environment. The default one modelled is a "`RALEIGH`" store location. + +[id="demo-scenario"] +== Demo Scenario + +The Retail Validated Pattern/Demo Scenario showcases the Quarkus Coffeeshop retail experience. Rather than modeling the complexities of a full retail environment—such as item files, tax tables, and inventory management—it focuses on a subset of services to illustrate data flow, microservice interaction through APIs, messaging, and data persistence. + +* Web Service - the point of sale within the store. 
It shows the menu, allows the user to order food and drinks, and shows when orders are ready.
+* Counter service - the "`heart`" of the store operation - receives orders and dispatches them to the barista and kitchen services, as appropriate. Users may order as many food and drink items in one order as they wish.
+* Barista - the service responsible for providing items from the
+"`drinks`" side of the menu.
+* Kitchen - the service responsible for providing items from the
+"`food`" side of the menu.
+
+Further documentation on the individual services is available at the upstream https://quarkuscoffeeshop.github.io/[Quarkus Coffeeshop] documentation site.
+
+link:/images/retail/retail-highlevel.png[image:/images/retail/retail-highlevel.png[Demo Scenario]]
\ No newline at end of file
diff --git a/modules/retail-deploying.adoc b/modules/retail-deploying.adoc
new file mode 100644
index 000000000..c6040ad91
--- /dev/null
+++ b/modules/retail-deploying.adoc
@@ -0,0 +1,202 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="deploying-retail-pattern"]
+= Deploying the retail pattern
+
+.Prerequisites
+
+* An OpenShift cluster
+ ** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console].
+ ** Select *OpenShift \-> Red Hat OpenShift Container Platform \-> Create cluster*.
+ ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`.
Before creating one, verify whether a dynamic `StorageClass` already exists by running the following command:
++
+[source,terminal]
+----
+$ oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,DEFAULT:.metadata.annotations."storageclass\.kubernetes\.io/is-default-class"
+----
++
+.Example output
++
+[source,terminal]
+----
+NAME      PROVISIONER       DEFAULT
+gp2-csi   ebs.csi.aws.com
+gp3-csi   ebs.csi.aws.com   true
+----
++
+For more information about creating a dynamic `StorageClass`, see the https://docs.openshift.com/container-platform/latest/storage/dynamic-provisioning.html[Dynamic provisioning] documentation.
+
+* Optional: A second OpenShift cluster for multicloud demonstration.
+//Replaced git and podman prereqs with the tooling dependencies page
+* Optional: A Quay account that can update images; this is needed only if you want to use the pipelines to customize the applications.
+* Optional: A Quay account with the following repositories set as public, and which you can write to:
+** quay.io/your-quay-username/quarkuscoffeeshop-barista
+** quay.io/your-quay-username/quarkuscoffeeshop-counter
+** quay.io/your-quay-username/quarkuscoffeeshop-inventory
+** quay.io/your-quay-username/quarkuscoffeeshop-web
+** quay.io/your-quay-username/quarkuscoffeeshop-customerloyalty
+** quay.io/your-quay-username/quarkuscoffeeshop-kitchen
+** quay.io/your-quay-username/quarkuscoffeeshop-majestic-monolith
+** quay.io/your-quay-username/quarkuscoffeeshop-monolith
++
+[NOTE]
+====
+These repos contain the demo's microservices. The public repos (`quay.io/hybridcloudpatterns/*`) provide pre-built images used by default, allowing the demo to run without rebuilding the apps. Creating your own Quay copies offers transparency and lets you reproduce results or customize the apps.
+====
+
+* https://validatedpatterns.io/learn/quickstart/[Install the tooling dependencies].
+
+The use of this pattern depends on having at least one running Red Hat OpenShift cluster.
However, consider creating a cluster for deploying the GitOps management hub assets and a separate cluster for the managed cluster.
+
+If you do not have a running Red Hat OpenShift cluster, you can start one on a public or private cloud by using https://console.redhat.com/openshift/create[Red Hat Hybrid Cloud Console].
+
+.Procedure
+
+. Fork the https://github.com/validatedpatterns/retail[retail] repository on GitHub.
+
+. Clone the forked copy of this repository.
++
+[source,terminal]
+----
+$ git clone git@github.com:your-username/retail.git
+----
+
+. Create a local copy of the secret values file that can safely include credentials. Run the following command:
++
+[source,terminal]
+----
+$ cp values-secret.yaml.template ~/values-secret.yaml
+----
+
+. Edit `~/values-secret.yaml`, populating it with your Quay `username` and `password`.
++
+[source,yaml]
+----
+# NEVER COMMIT THESE VALUES TO GIT
+version: "2.0"
+secrets:
+  # These are credentials to allow you to push to your image registry (quay.io) for application images
+  - name: imageregistry
+    fields:
+      # eg. Quay -> Robot Accounts -> Robot Login
+      - name: username
+        value: "my-quay-username"
+      - name: password
+        value: "my-quay-password"
+----
++
+[NOTE]
+====
+Do not commit this file. You do not want to push personal credentials to GitHub.
+====
+
+. Customize the deployment for your cluster by following these steps:
+
+.. Create a new branch named `my-branch` and switch to it by running the following command:
++
+[source,terminal]
+----
+$ git switch -c my-branch
+----
+
+.. Edit the `values-global.yaml` file to customize the deployment for your cluster by running the following command:
++
+[source,terminal]
+----
+$ vi values-global.yaml
+----
++
+The defaults should suffice if you just want to see the apps running. The values that you might change are under the `imageregistry` key, if you copied the images to your own Quay account and hostname.
If you like, you can change the `git` settings of `account`, `email`, and `hostname` to reflect your own account settings.
++
+If you plan to customize the build of the applications themselves, there are `revision` and `imageTag` settings for each of them.
+
+.. Stage the changes to the `values-global.yaml` file by running the following command:
++
+[source,terminal]
+----
+$ git add values-global.yaml
+----
+
+.. Commit the changes to the `values-global.yaml` file by running the following command:
++
+[source,terminal]
+----
+$ git commit -m "update deployment for my-branch"
+----
+
+.. Push the changes to the `values-global.yaml` file by running the following command:
++
+[source,terminal]
+----
+$ git push origin my-branch
+----
+
+. Deploy the pattern by running `./pattern.sh make install` or by using the link:/infrastructure/using-validated-pattern-operator/[Validated Patterns Operator].
+
+[id="deploying-cluster-using-patternsh-file"]
+== Deploying the pattern by using the pattern.sh script
+
+To deploy the pattern by using the `pattern.sh` script, complete the following steps:
+
+. Log in to your cluster by running the following:
+
+.. Obtain an API token by visiting https://oauth-openshift.apps../oauth/token/request
+
+.. Log in with this retrieved token by running the following command:
++
+[source,terminal]
+----
+$ oc login --token= --server=https://api..:6443
+----
+
+. Alternatively, log in by running the following command:
++
+[source,terminal]
+----
+$ export KUBECONFIG=~/
+----
+
+. Deploy the pattern to your cluster by running the following command:
++
+[source,terminal]
+----
+$ ./pattern.sh make install
+----
+
+[id="verify-retail-pattern-install"]
+== Verify the retail pattern installation
+
+. Verify that the Operators have been installed.
+
+ .. To verify, in the OpenShift Container Platform web console, navigate to the *Operators → Installed Operators* page.
+
+ ..
Set your project to `All Projects` and verify that the Operators are installed and have a status of `Succeeded`.
++
+image:/images/retail/retail-v1-operators.png[retail-v1-operators]
+
+. Track the progress through the Hub ArgoCD UI from the nines menu:
++
+image:/images/retail/retail-v1-console-menu.png[retail-v1-console-menu]
+
+. Ensure that the Hub ArgoCD instance shows all of its apps in Healthy and Synced status once all of the images have been built:
++
+image:/images/retail/retail-v1-argo-apps-p1.png[retail-v1-argo-apps-p1]
+
+. Check on the pipelines, if you chose to run them. They should all complete successfully:
++
+image:/images/retail/retail-v1-pipelines.png[retail-v1-pipelines]
+
+. Go to the *Quarkus Coffeeshop Landing Page* where you are presented with the applications in the pattern:
++
+image:/images/retail/retail-v1-landing-page.png[retail-v1-landing-page]
+
+. Click the *Store Web Page* to open the Quarkus Coffeeshop Demo:
++
+image:/images/retail/retail-v1-store-page.png[retail-v1-store-page]
+
+. Click the *TEST Store Web Page* to open a separate copy of the same demo.
+
+. Click the respective *Kafdrop* links to go to a Kafdrop instance that allows inspection of each of the respective environments.
++
+image:/images/retail/retail-v1-kafdrop.png[retail-v1-kafdrop]
diff --git a/modules/retail-example-applications.adoc b/modules/retail-example-applications.adoc
new file mode 100644
index 000000000..073d7b81b
--- /dev/null
+++ b/modules/retail-example-applications.adoc
@@ -0,0 +1,50 @@
+:_content-type: PROCEDURE
+:imagesdir: ../../../images
+
+[id="demonstrating-retail-example-applications"]
+= Demonstrating Retail example applications
+
+Up until now the retail validated pattern has focused primarily on successfully deploying the architectural pattern. Now it is time to see
+the actual applications running as we have deployed them.
If you have already deployed the hub cluster, then you have already seen several applications deployed in the OpenShift GitOps console. If you
+have not done this, we recommend that you deploy the hub after you have set up the Quay repositories described below.
+
+== Ordering Items at the Coffeeshop
+
+The easiest way to get to the coffeeshop store page is from the OpenShift Console Menu Landing Page entry:
+
+image:/images/retail/retail-v1-console-menu.png[retail-v1-console-menu]
+
+. Clicking the *Quarkus Coffeeshop Landing Page* link brings you to this page:
++
+image:/images/retail/retail-v1-landing-page.png[retail-v1-landing-page]
+
+. Selecting either the `Store Web Page` or `TEST Store Web Page` link brings you to a screen that looks like this:
++
+image:/images/retail/retail-v1-store-page.png[retail-v1-store-page]
++
+[NOTE]
+====
+The applications are initially identical. The "`TEST`" site is deployed to the `quarkuscoffeeshop-demo` namespace; the regular Store site is deployed to the `quarkuscoffeeshop-store` namespace.
+
+Each store requires supporting services, in PostgreSQL and Kafka. In our pattern, PostgreSQL is provided by the Crunchy PostgreSQL operator, and Kafka is provided by the Red Hat AMQ Streams operator. Each instance, the regular instance and the TEST instance, has its own instance of each of these supporting services.
+====
+
+. Order by clicking the `Place an Order` button on the front page. The menu should look like this:
++
+image:/images/retail/retail-v1-store-web-menu.png[retail-v1-store-web-menu]
+
+. Click the `Add` button next to a menu item; the item name will appear. Add a name for the order:
++
+image:/images/retail/retail-v1-order-p1.png[retail-v1-order-p1]
+
+. Add as many items as you want.
On your last item, click the `Place Order` button on the item dialog:
++
+image:/images/retail/retail-v1-place-order.png[retail-v1-place-order]
+
+As the orders are serviced by the barista and kitchen services, you can see their status in the `Orders` section of the page:
+
+image:/images/retail/retail-v1-orders-status.png[retail-v1-orders-status]
+
+
diff --git a/modules/retail-understanding-rhacm-requirements.adoc b/modules/retail-understanding-rhacm-requirements.adoc
new file mode 100644
index 000000000..91cf91315
--- /dev/null
+++ b/modules/retail-understanding-rhacm-requirements.adoc
@@ -0,0 +1,72 @@
+:_content-type: CONCEPT
+:imagesdir: ../../images
+
+[id="understanding-acm-requirements-managed-cluster"]
+= Understanding Red Hat Advanced Cluster Management requirements
+
+By default, Red Hat Advanced Cluster Management (RHACM) manages the `clusterGroup` applications that are deployed on all clusters.
+
+Add a `managedClusterGroup` for each cluster or group of clusters that you want to manage by following this procedure.
+
+.Procedure
+
+. Switch to your locally created feature branch by running the following command:
++
+[source,terminal]
+----
+$ git checkout my-branch
+----
+
+. In the `values-hub.yaml` file, a `managedClusterGroup` `raleigh` already exists, as shown in this YAML extract:
++
+[source,yaml]
+----
+managedClusterGroups:
+  raleigh:
+    name: store-raleigh
+    helmOverrides:
+    # Values must be strings!
+    - name: clusterGroup.isHubCluster
+      value: "false"
+    clusterSelector:
+      matchLabels:
+        clusterGroup: store-raleigh
+      matchExpressions:
+      - key: vendor
+        operator: In
+        values:
+        - OpenShift
+----
++
+The YAML file segment defines the `raleigh` managed cluster group, which deploys `clusterGroup` applications on clusters labeled with `clusterGroup=store-raleigh`.
The `clusterSelector` ensures that only clusters with the `clusterGroup=store-raleigh` label and the `vendor=OpenShift` label are included in this group. Specific subscriptions, Operators, applications, and projects for this cluster group are managed through the `values-store-raleigh.yaml` file.
+
+. To add a new `managedClusterGroup`, add a new entry to the `managedClusterGroups` block in the `values-hub.yaml` file as follows:
++
+[source,yaml]
+----
+charlotte:
+  name: store-charlotte
+  helmOverrides:
+  - name: clusterGroup.isHubCluster
+    value: "false"
+  clusterSelector:
+    matchLabels:
+      clusterGroup: store-charlotte
+    matchExpressions:
+    - key: vendor
+      operator: In
+      values:
+      - OpenShift
+----
++
+[NOTE]
+====
+The `charlotte` cluster group is managed separately, using its own `values-store-charlotte.yaml` file.
+====
+
+. Make a copy of the `values-store-raleigh.yaml` file and name it `values-store-charlotte.yaml`. Update the file with the appropriate values for the `charlotte` cluster group.
+
+[IMPORTANT]
+====
+Ensure that you commit the changes and push them to GitHub so that GitOps can fetch your changes and apply them.
+====
\ No newline at end of file
diff --git a/static/images/retail/retail-v1-landing-page-1.png b/static/images/retail/retail-v1-landing-page-1.png
new file mode 100644
index 000000000..24a472ec2
Binary files /dev/null and b/static/images/retail/retail-v1-landing-page-1.png differ
diff --git a/static/images/retail/retail-v1-landing-page.png b/static/images/retail/retail-v1-landing-page.png
index fff5c07b8..d7798a0bd 100644
Binary files a/static/images/retail/retail-v1-landing-page.png and b/static/images/retail/retail-v1-landing-page.png differ