diff --git a/.wordlist.txt b/.wordlist.txt index 27406e786..126a220ff 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -141,6 +141,7 @@ devel devops devsecops dhhph +diag differentiator dnf dns diff --git a/config.yaml b/config.yaml index 4cf35ebcd..abec6173e 100644 --- a/config.yaml +++ b/config.yaml @@ -13,7 +13,7 @@ security: markup: asciidocExt: - attributes: {allow-uri-read, source-highlighter: rouge, icons: font, sectanchors} + attributes: {allow-uri-read, source-highlighter: rouge, icons: font, sectanchors, showtitle} safeMode: unsafe imagesdir: images tableOfContents: diff --git a/content/patterns/medical-diagnosis/_index.adoc b/content/patterns/medical-diagnosis/_index.adoc index 6595d06eb..fa6cf967d 100644 --- a/content/patterns/medical-diagnosis/_index.adoc +++ b/content/patterns/medical-diagnosis/_index.adoc @@ -2,7 +2,7 @@ title: Medical Diagnosis date: 2021-01-19 validated: true -summary: This pattern is based on a demo implementation of an automated data pipeline for chest x-ray analysis previously developed by Red Hat. +summary: This pattern is based on a demo implementation of an automated data pipeline for chest X-ray analysis previously developed by Red Hat. products: - Red Hat OpenShift Container Platform - Red Hat OpenShift Serverless @@ -24,69 +24,82 @@ ci: medicaldiag :_content-type: ASSEMBLY include::modules/comm-attributes.adoc[] -== Background +//Module to be included +//:_content-type: CONCEPT +//:imagesdir: ../../images +[id="about-med-diag-pattern"] += About the {med-pattern} -This Validated Pattern is based on a demo implementation of an automated data pipeline for chest Xray -analysis previously developed by Red Hat. The original demo can be found link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veteran Affairs. +Background:: -This validated pattern includes the same functionality as the original demonstration. The difference is -that we use the _GitOps_ framework to deploy the pattern including operators, creation of namespaces, -and cluster configuration. Using GitOps provides a much more efficient means of doing continuous deployment. +This validated pattern is based on a demo implementation of an automated data pipeline for chest X-ray analysis that was previously developed by {redhat}. You can find the original demonstration link:https://github.com/red-hat-data-services/jumpstart-library[here]. It was developed for the US Department of Veteran Affairs. -What does this pattern do?: +This validated pattern includes the same functionality as the original demonstration. The difference is that this solution uses the GitOps framework to deploy the pattern including Operators, creation of namespaces, and cluster configuration. Using GitOps provides an efficient means of implementing continuous deployment. -* Ingest chest Xrays from a simulated Xray machine and puts them into an objectStore based on Ceph. -* The objectStore sends a notification to a Kafka topic. -* A KNative Eventing Listener to the topic triggers a KNative Serving function. +Workflow:: + +* Ingest chest X-rays from a simulated X-ray machine and puts them into an `objectStore` based on Ceph. +* The `objectStore` sends a notification to a Kafka topic. +* A KNative Eventing listener to the topic triggers a KNative Serving function. * An ML-trained model running in a container makes a risk assessment of Pneumonia for incoming images. 
-* A Grafana dashboard displays the pipeline in real time, along with images incoming, processed and anonymized, as well as full metrics collected from Prometheus.
+* A Grafana dashboard displays the pipeline in real time, along with images incoming, processed, anonymized, and full metrics collected from Prometheus.

This pipeline is showcased link:https://www.youtube.com/watch?v=zja83FVsm14[in this video].

image::medical-edge/dashboard.png[link="/images/medical-edge/dashboard.png"]

-This validated pattern is still under development. Any questions or concerns
-please contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio].
+//[NOTE]
+//====
+//This validated pattern is still under development. If you have any questions or concerns contact mailto:jrickard@redhat.com[Jonny Rickard] or mailto:claudiol@redhat.com[Lester Claudio].
+//====
+
+[id="about-solution-med"]
+== About the solution elements

-=== Solution elements
+This solution helps you understand the following:

-* How to use a GitOps approach to keep in control of configuration and operations
+* How to use a GitOps approach to stay in control of configuration and operations.
* How to deploy AI/ML technologies for medical diagnosis using GitOps.

-=== Red Hat Technologies
+The {med-pattern} uses the following products and technologies:

-* {rh-ocp} (Kubernetes)
-* {rh-gitops} (ArgoCD)
-* Red Hat AMQ Streams (Apache Kafka Event Broker)
-* Red Hat OpenShift Serverless (Knative Eventing, Knative Serving)
-* Red Hat OpenShift Data Foundations (Cloud Native storage)
-* Grafana dashboard (OpenShift Grafana Operator)
-* Open Data Hub
+* {rh-ocp} for container orchestration
+* {rh-gitops}, a GitOps continuous delivery (CD) solution
+* {rh-amq-first}, an event streaming platform based on Apache Kafka
+* {rh-serverless-first} for event-driven applications
+* {rh-ocp-data-first} for cloud native storage capabilities
+* {grafana-op} to manage and share Grafana dashboards, data sources, and so on
* S3 storage

-== Architecture
+[id="about-architecture-med"]
+== About the architecture

-In this iteration of the pattern *there is no edge component* . Future releases have planned Edge deployment capabilities as part of the pattern architecture.
+[IMPORTANT]
+====
+Presently, the {med-pattern} does not have an edge component. Edge deployment capabilities are planned as part of the pattern architecture for a future release.
+====

image::medical-edge/edge-medical-diagnosis-marketing-slide.png[link="/images/medical-edge/edge-medical-diagnosis-marketing-slide.png"]

-Components are running on OpenShift either at the data center or at the medical facility (or public cloud running OpenShift).
+Components run on OpenShift at the data center, at the medical facility, or on a public cloud running OpenShift.

-=== Physical Schema
+[id="about-physical-schema-med"]
+=== About the physical schema

-The diagram below shows the components that are deployed with the various networks that connect them.
+The following diagram shows the components that are deployed with the various networks that connect them.

image::medical-edge/physical-network.png[link="/images/medical-edge/physical-network.png"]

-The diagram below shows the components that are deployed with the the data flows and API calls between them.
+The following diagram shows the components that are deployed with the data flows and API calls between them.
image::medical-edge/physical-dataflow.png[link="/images/medical-edge/physical-dataflow.png"] -== Recorded Demo +== Recorded demo link:/videos/xray-deployment.svg[image:/videos/xray-deployment.svg[Demo\]] -== What Next +[id="next-steps_med-diag-index"] +== Next steps * Getting started link:getting-started[Deploy the Pattern] -* Visit the link:https://github.com/hybrid-cloud-patterns/medical-diagnosis[repository] +//We have relevant links on the patterns page diff --git a/content/patterns/medical-diagnosis/cluster-sizing.adoc b/content/patterns/medical-diagnosis/cluster-sizing.adoc index 0df2e78c0..f6daf3334 100644 --- a/content/patterns/medical-diagnosis/cluster-sizing.adoc +++ b/content/patterns/medical-diagnosis/cluster-sizing.adoc @@ -7,50 +7,15 @@ aliases: /medical-diagnosis/cluster-sizing/ :toc: :imagesdir: /images :_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] -[id="tested-platforms-cluster-sizing"] -== Tested Platforms +//Module to be included +//:_content-type: CONCEPT +//:imagesdir: ../../images +[id="about-openshift-cluster-sizing-med"] += About OpenShift cluster sizing for the {med-pattern} -The *Medical Diagnosis* pattern has been tested in the following Certified Cloud Providers. - -|=== -| *Certified Cloud Providers* | 4.8 | 4.9 | 4.10 | 4.11 - -| Amazon Web Services -| Tested -| Tested -| Tested -| Tested - -| Google Compute -| -| -| -| - -| Microsoft Azure -| -| -| -| -|=== - - -[id="general-openshift-minimum-requirements-cluster-sizing"] -== General OpenShift Minimum Requirements - -OpenShift 4 has the following minimum requirements for sizing of nodes: - -* *Minimum 4 vCPU* (additional are strongly recommended). -* *Minimum 16 GB RAM* (additional memory is strongly recommended, especially if etcd is colocated on Control Planes). -* *Minimum 40 GB* hard disk space for the file system containing /var/. -* *Minimum 1 GB* hard disk space for the file system containing /usr/local/bin/. 
- - -[id="medical-diagnosis-pattern-components-cluster-sizing"] -=== Medical Diagnosis Pattern Components - -Here's an inventory of what gets deployed by the *Medical Diagnosis* pattern: +To understand cluster sizing requirements for the {med-pattern}, consider the following components that the {med-pattern} deploys on the datacenter or the hub OpenShift cluster: |=== | Name | Kind | Namespace | Description @@ -60,224 +25,79 @@ Here's an inventory of what gets deployed by the *Medical Diagnosis* pattern: | medical-diagnosis-hub | Hub GitOps management -| Red Hat OpenShift GitOps +| {rh-gitops} | Operator | openshift-operators -| OpenShift GitOps +| {rh-gitops-short} -| Red Hat OpenShift Data Foundations +| {rh-ocp-data-first} | Operator | openshift-storage | Cloud Native storage solution -| Red Hat AMQStreams (Apache Kafka) +| {rh-amq-streams} | Operator | openshift-operators | AMQ Streams provides Apache Kafka access -| Red Hat OpenShift Serverless +| {rh-serverless-first} | Operator -| - knative-eventing -- knative-serving -| Provides access to knative eventing and serving functions +| - knative-serving (knative-eventing) +| Provides access to Knative Serving and Eventing functions |=== +//AI: Removed the following since we have CI status linked on the patterns page +//[id="tested-platforms-cluster-sizing"] +//== Tested Platforms + +: Removed the following in favor of the link to OCP docs +//[id="general-openshift-minimum-requirements-cluster-sizing"] +//== General OpenShift Minimum Requirements +The minimum requirements for an {ocp} cluster depend on your installation platform. For instance, for AWS, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_aws/preparing-to-install-on-aws.html#requirements-for-installing-ocp-on-aws[Installing {ocp} on AWS], and for bare-metal, see link:https://docs.openshift.com/container-platform/4.13/installing/installing_bare_metal/installing-bare-metal.html#installation-minimum-resource-requirements_installing-bare-metal[Installing {ocp} on bare metal]. + +For information about requirements for additional platforms, see link:https://docs.openshift.com/container-platform/4.13/installing/installing-preparing.html[{ocp} documentation]. -[id="medical-diagnosis-pattern-openshift-cluster-size-cluster-sizing"] -=== Medical Diagnosis Pattern OpenShift Cluster Size +//Module to be included +//:_content-type: CONCEPT +//:imagesdir: ../../images -The Medical Diagnosis pattern has been tested with a defined set of configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture. +[id="med-openshift-cluster-size"] +=== About {med-pattern} OpenShift cluster size -The OpenShift cluster for the *Medical Diagnosis* pattern needs to be sized a bit larger to support the compute and storage demands of OpenShift Data Foundations and other operators that make up the pattern. The above cluster sizing is close to a *minimum* size for an OpenShift cluster supporting this pattern. In the next few sections we take some snapshots of the cluster utilization while the *Medical Diagnosis* pattern is running. Keep in mind that resources will have to be added as more developers are working building their applications. +The {med-pattern} has been tested with a defined set of configurations that represent the most common combinations that {ocp} customers are using for the x86_64 architecture. 
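+
+If you already have a running cluster, one quick way to confirm that its nodes meet these minimum requirements is to inspect the capacity reported for each node. The following is a sketch only; `<node-name>` is a placeholder, and the reported values depend on your platform and instance types:
+
+[source,terminal]
+----
+$ oc get nodes
+$ oc describe node <node-name> | grep -A 6 "Capacity"
+----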
+ +For {med-pattern}, the OpenShift cluster size must be a bit larger to support the compute and storage demands of OpenShift Data Foundations and other Operators. +//AI:Removed a few lines from here since the content is updated to remove any ambuguity. We rather use direct links (OCP docs/ GCP/AWS/Azure) +[NOTE] +==== +You might want to add resources when more developers are working on building their applications. +==== The OpenShift cluster is a standard deployment of 3 control plane nodes and 3 or more worker nodes. [cols="^,^,^,^"] |=== -| Node Type | Number of nodes | Cloud Provider | Instance Type +| Node type | Number of nodes | Cloud provider | Instance type -| Control Plane/Worker -| 3 +| Control plane and worker +| 3 and 3 | Google Cloud | n1-standard-8 -| Control Plane/Worker -| 3 +| Control plane and worker +| 3 and 3 | Amazon Cloud Services | m5.2xlarge -| Control Plane/Worker -| 3 +| Control plane and worker +| 3 and 3 | Microsoft Azure | Standard_D8s_v3 |=== -[id="aws-instance-types-cluster-sizing"] -=== AWS Instance Types - -The *Medical Diagnosis* pattern was tested with the highlighted AWS instances in *bold*. The OpenShift installer will let you know if the instance type meets the minimum requirements for a cluster. - -The message that the openshift installer will give you will be similar to this message - -[,text] ----- -INFO Credentials loaded from default AWS environment variables -FATAL failed to fetch Metadata: failed to load asset "Install Config": [controlPlane.platform.aws.type: Invalid value: "m4.large": instance type does not meet minimum resource requirements of 4 vCPUs, controlPlane.platform.aws.type: Invalid value: "m4.large": instance type does not meet minimum resource requirements of 16384 MiB Memory] ----- - -Below you can find a list of the AWS instance types that can be used to deploy the *Medical Diagnosis* pattern. - -[cols="^,^,^,^,^"] -|=== -| Instance type | Default vCPUs | Memory (GiB) | Hub | Factory/Edge - -| -| -| -| 3x3 OCP Cluster -| 3 Node OCP Cluster - -| m4.xlarge -| 4 -| 16 -| N -| N - -| *m4.2xlarge* -| 8 -| 32 -| Y -| Y - -| m4.4xlarge -| 16 -| 64 -| Y -| Y - -| m4.10xlarge -| 40 -| 160 -| Y -| Y - -| m4.16xlarge -| 64 -| 256 -| Y -| Y - -| *m5.xlarge* -| 4 -| 16 -| Y -| N - -| *m5.2xlarge* -| 8 -| 32 -| Y -| Y - -| *m5.4xlarge* -| 16 -| 64 -| Y -| Y - -| m5.8xlarge -| 32 -| 128 -| Y -| Y - -| m5.12xlarge -| 48 -| 192 -| Y -| Y - -| m5.16xlarge -| 64 -| 256 -| Y -| Y - -| m5.24xlarge -| 96 -| 384 -| Y -| Y -|=== - -The OpenShift cluster is made of 3 Control Plane nodes and 3 Workers. For the node sizes we used the *m5.4xlarge* on AWS and this instance type met the minimum requirements to deploy the *Medical Diagnosis* pattern successfully. - -To understand better what types of nodes you can use on other Cloud Providers we provide some of the details below. - -[id="azure-instance-types-cluster-sizing"] -=== Azure Instance Types - -The *Medical Diagnosis* pattern was also deployed on Azure using the *Standard_D8s_v3* VM size. Below is a table of different VM sizes available for Azure. Keep in mind that due to limited access to Azure we only used the *Standard_D8s_v3* VM size. - -The OpenShift cluster is made of 3 Control Plane nodes and 3 Workers. - -|=== -| Type | Sizes | Description - -| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-general[General purpose] -| B, Dsv3, Dv3, Dasv4, Dav4, DSv2, Dv2, Av2, DC, DCv2, Dv4, Dsv4, Ddv4, Ddsv4 -| Balanced CPU-to-memory ratio. 
Ideal for testing and development, small to medium databases, and low to medium traffic web servers. - -| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-compute[Compute optimized] -| F, Fs, Fsv2, FX -| High CPU-to-memory ratio. Good for medium traffic web servers, network appliances, batch processes, and application servers. - -| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-memory[Memory optimized] -| Esv3, Ev3, Easv4, Eav4, Ev4, Esv4, Edv4, Edsv4, Mv2, M, DSv2, Dv2 -| High memory-to-CPU ratio. Great for relational database servers, medium to large caches, and in-memory analytics. - -| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-storage[Storage optimized] -| Lsv2 -| High disk throughput and IO ideal for Big Data, SQL, NoSQL databases, data warehousing and large transactional databases. - -| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-gpu[GPU] -| NC, NCv2, NCv3, NCasT4_v3, ND, NDv2, NV, NVv3, NVv4 -| Specialized virtual machines targeted for heavy graphic rendering and video editing, as well as model training and inferencing (ND) with deep learning. Available with single or multiple GPUs. - -| https://docs.microsoft.com/en-us/azure/virtual-machines/sizes-hpc[High performance compute] -| HB, HBv2, HBv3, HC, H -| Our fastest and most powerful CPU virtual machines with optional high-throughput network interfaces (RDMA). -|=== - -For more information please refer to the https://docs.microsoft.com/en-us/azure/virtual-machines/sizes[Azure VM Size Page]. - -[id="google-cloud-gcp-instance-types-cluster-sizing"] -=== Google Cloud (GCP) Instance Types - -The *Medical Diagnosis* pattern was also deployed on GCP using the *n1-standard-8* VM size. Below is a table of different VM sizes available for GCP. Keep in mind that due to limited access to GCP we only used the *n1-standard-8* VM size. - -The OpenShift cluster is made of 3 Control Plane and 3 Workers cluster. - -The following table provides VM recommendations for different workloads. - -|=== -| *General purpose* | *Workload optimized* | | | | - -| Cost-optimized | Balanced | Scale-out optimized | Memory-optimized | Compute-optimized | Accelerator-optimized - -| E2 -| N2, N2D, N1 -| T2D -| M2, M1 -| C2 -| A2 - -| Day-to-day computing at a lower cost -| Balanced price/performance across a wide range of VM shapes -| Best performance/cost for scale-out workloads -| Ultra high-memory workloads -| Ultra high performance for compute-intensive workloads -| Optimized for high performance computing workloads -|=== - -For more information please refer to the https://cloud.google.com/compute/docs/machine-types[GCP VM Size Page]. 
+[role="_additional-resources"] +.Additional resource +* link:https://aws.amazon.com/ec2/instance-types/[AWS instance types] +* link:https://learn.microsoft.com/en-us/azure/virtual-machines/sizes[Azure instance types: Sizes for virtual machines in Azure] +* link:https://cloud.google.com/compute/docs/machine-resource[Google Cloud Platform instance types: Machine families resource and comparison guide] +//Removed section for instance types as we did for MCG diff --git a/content/patterns/medical-diagnosis/getting-started.adoc b/content/patterns/medical-diagnosis/getting-started.adoc index df2e38879..10aa8e19a 100644 --- a/content/patterns/medical-diagnosis/getting-started.adoc +++ b/content/patterns/medical-diagnosis/getting-started.adoc @@ -7,231 +7,294 @@ aliases: /medical-diagnosis/getting-started/ :toc: :imagesdir: /images :_content-type: ASSEMBLY - -== Prerequisites - -. An OpenShift cluster (Go to https://console.redhat.com/openshift/create[the OpenShift console]). Cluster must have a dynamic StorageClass to provision PersistentVolumes. See also link:../../medical-diagnosis/cluster-sizing[sizing your cluster]. -. A GitHub account (and a token for it with repositories permissions, to read from and write to your forks) -. S3-capable Storage set up in your public/private cloud for the x-ray images -. The helm binary, see link:https://helm.sh/docs/intro/install/[here] - +include::modules/comm-attributes.adoc[] + +//Module to be included +//:_content-type: PROCEDURE +//:imagesdir: ../../../images +[id="deploying-med-pattern"] += Deploying the {med-pattern} + +.Prerequisites + +* An OpenShift cluster + ** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console]. + ** Select *OpenShift \-> Clusters \-> Create cluster*. + ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. See link:../../medical-diagnosis/cluster-sizing[sizing your cluster]. +* A GitHub account and a token for it with repositories permissions, to read from and write to your forks. +. An S3-capable Storage set up in your public or private cloud for the x-ray images +. The Helm binary, see link:https://helm.sh/docs/intro/install/[Installing Helm] For installation tooling dependencies, see link:https://validatedpatterns.io/learn/quickstart/[Patterns quick start]. -The use of this pattern depends on having a Red Hat OpenShift cluster. In this version of the validated pattern -there is no dedicated Hub / Edge cluster for the *Medical Diagnosis* pattern. - -If you do not have a running Red Hat OpenShift cluster you can start one on a -public or private cloud by using link:https://console.redhat.com/openshift/create[Red Hat's cloud service]. +[NOTE] +==== +The {med-pattern} does not have a dedicated hub or edge cluster. +==== [id="setting-up-an-s3-bucket-for-the-xray-images-getting-started"] === Setting up an S3 Bucket for the xray-images -An S3 bucket is required for image processing. Please see the <> section below for creating a bucket in AWS S3. The following links provide information on how to create the buckets required for this validated pattern on several cloud providers. +An S3 bucket is required for image processing. +For information about creating a bucket in AWS S3, see the <> section. 
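+
+If you prefer to create the bucket from the command line and have the AWS CLI installed and configured, a minimal sketch might look like the following. The bucket name and region are placeholders; adjust them for your environment, and omit the `--create-bucket-configuration` argument when you use `us-east-1`:
+
+[source,terminal]
+----
+$ aws s3api create-bucket --bucket mytest-bucket --region us-west-2 \
+    --create-bucket-configuration LocationConstraint=us-west-2
+----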
+
+For information about creating the buckets on other cloud providers, see the following links:

* link:https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html[AWS S3]
* link:https://docs.microsoft.com/en-us/azure/storage/common/storage-account-create?tabs=azure-portal[Azure Blob Storage]
* link:https://cloud.google.com/storage/docs/quickstart-console[GCP Cloud Storage]

+//Module to be included
+//:_content-type: PROCEDURE
+//:imagesdir: ../../../images
+
[id="utilities"]
= Utilities
+//AI: Update the use of community and VP post naming tier update

-A number of utilities have been built by the validated patterns team to lower the barrier to entry for using the community or Red Hat Validated Patterns. To use these utilities you will need to export some environment variables for your cloud provider:
+To use the link:https://github.com/validatedpatterns/utilities[utilities] that are available, export some environment variables for your cloud provider. For example:

-For AWS (replace with your keys):
+For AWS:

-[,sh]
+[source,terminal]
----
-export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX
-export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
+export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX # replace with your key
+export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX # replace with your key
----

-Then we need to create the S3 bucket and copy over the data from the validated patterns public bucket to the created bucket for your demo. You can do this on the cloud providers console or use the scripts provided on link:https://github.com/hybrid-cloud-patterns/utilities/[validated-patterns-utilities] repository.
+Create the S3 bucket and copy over the data from the validated patterns public bucket to the created bucket for your demo. You can do this on the cloud provider's console, or you can use the scripts that are provided in the link:https://github.com/validatedpatterns/utilities[utilities] repository.

-[,sh]
+[source,terminal]
----
-python s3-create.py -b mytest-bucket -r us-west-2 -p
-python s3-sync-buckets.py -s com.validated-patterns.xray-source -t mytest-bucket -r us-west-2
+$ python s3-create.py -b mytest-bucket -r us-west-2 -p
+$ python s3-sync-buckets.py -s validated-patterns-md-xray -t mytest-bucket -r us-west-2
----

-The output should look similar to this edited/compressed output.
+.Example output

image:/videos/bucket-setup.svg[Bucket setup]

-Keep note of the name of the bucket you created, as you will need it for further pattern configuration.
-There is some key information you will need to take note of that is required by the 'values-global.yaml' file. You will need the URL for the bucket and its name. At the very end of the `values-global.yaml` file you will see a section for `s3:` were these values need to be changed.
+Note the name and URL of the bucket; you need them for further pattern configuration. You must update these values in the `s3:` section at the end of the `values-global.yaml` file.

-[id="preparation"]
-= Preparation
+[id="preparing-for-deployment"]
+= Preparing for deployment
+
+.Procedure

-. Fork the link:https://github.com/hybrid-cloud-patterns/medical-diagnosis[medical-diagnosis] repo on GitHub. It is necessary to fork because your fork will be updated as part of the GitOps and DevOps processes.
+. Fork the link:https://github.com/validatedpatterns/medical-diagnosis[medical-diagnosis] repository on GitHub. You must fork the repository because your fork will be updated as part of the GitOps and DevOps processes.
. 
Clone the forked copy of this repository.
+
-[,sh]
+[source,terminal]
----
-git clone git@github.com:/medical-diagnosis.git
+$ git clone git@github.com:/medical-diagnosis.git
----

-. Create a local copy of the Helm values file that can safely include credentials
+. Create a local copy of the Helm values file that can safely include credentials.
+
-*DO NOT COMMIT THIS FILE*
+[WARNING]
+====
+Do not commit this file. You do not want to push personal credentials to GitHub.
+====
+
-You do not want to push credentials to GitHub.
+Run the following commands:
+
-[,sh]
+[source,terminal]
----
-cp values-secret.yaml.template ~/values-secret.yaml
-vi ~/values-secret.yaml
+$ cp values-secret.yaml.template ~/values-secret-medical-diagnosis.yaml
+$ vi ~/values-secret-medical-diagnosis.yaml
----
-
-*values-secret.yaml example*
++
+.Example `values-secret.yaml` file
[source,yaml]
----
+version: "2.0"
secrets:
-  xraylab:
-    database-user: xraylab
-    database-password: ## Insert your custom password here ##
-    database-root-password: ## Insert your custom password here ##
-    database-host: xraylabdb
-    database-db: xraylabdb
-    database-master-user: xraylab
-    database-master-password: ## Insert your custom password here ##
-
-  grafana:
-    GF_SECURITY_ADMIN_PASSWORD: ## Insert your custom password here ##
-    GF_SECURITY_ADMIN_USER: root
-----
-
-When you edit the file you can make changes to the various DB and Grafana passwords if you wish.
-
-. Customize the `values-global.yaml` for your deployment
+  # NEVER COMMIT THESE VALUES TO GIT
+
+  # Database login credentials and configuration
+  - name: xraylab
+    fields:
+    - name: database-user
+      value: xraylab
+    - name: database-host
+      value: xraylabdb
+    - name: database-db
+      value: xraylabdb
+    - name: database-master-user
+      value: xraylab
+    - name: database-password
+      onMissingValue: generate
+      vaultPolicy: validatedPatternDefaultPolicy
+    - name: database-root-password
+      onMissingValue: generate
+      vaultPolicy: validatedPatternDefaultPolicy
+    - name: database-master-password
+      onMissingValue: generate
+      vaultPolicy: validatedPatternDefaultPolicy
+
+  # Grafana Dashboard admin user/password
+  - name: grafana
+    fields:
+    - name: GF_SECURITY_ADMIN_USER
+      value: root
+    - name: GF_SECURITY_ADMIN_PASSWORD
+      onMissingValue: generate
+      vaultPolicy: validatedPatternDefaultPolicy
+----
++
+By default, the Vault password policy generates the passwords for you. However, you can create your own passwords.
++
+[NOTE]
+====
+When defining a custom password for the database users, avoid using the `$` special character because the shell interprets it, which results in an incorrect password being set.
+====
+
+. 
To customize the deployment for your cluster, update the `values-global.yaml` file by running the following commands: ++ +[source,terminal] +---- +$ git checkout -b my-branch +$ vi values-global.yaml +---- ++ +Replace instances of PROVIDE_ with your specific configuration + -[,sh] ----- -git checkout -b my-branch -vi values-global.yaml ----- - -*Replace instances of PROVIDE_ with your specific configuration* - [source,yaml] ---- ...omitted datacenter: - cloudProvider: PROVIDE_CLOUD_PROVIDER #aws, azure - storageClassName: PROVIDE_STORAGECLASS_NAME #gp2 (aws) - region: PROVIDE_CLOUD_REGION #us-east-1 + cloudProvider: PROVIDE_CLOUD_PROVIDER #AWS, AZURE, GCP + storageClassName: PROVIDE_STORAGECLASS_NAME #gp3-csi + region: PROVIDE_CLOUD_REGION #us-east-2 clustername: PROVIDE_CLUSTER_NAME #OpenShift clusterName - domain: PROVIDE_DNS_DOMAIN #blueprints.rhecoeng.com + domain: PROVIDE_DNS_DOMAIN #example.com - s3: - # Values for S3 bucket access - # Replace with AWS region where S3 bucket was created - # Replace and with your OpenShift cluster values - # bucketSource: "https://s3..amazonaws.com/" - bucketSource: PROVIDE_BUCKET_SOURCE #"https://s3.us-east-2.amazonaws.com/com.validated-patterns.xray-source" - # Bucket base name used for xray images - bucketBaseName: "xray-source" + s3: + # Values for S3 bucket access + # Replace with AWS region where S3 bucket was created + # Replace and with your OpenShift cluster values + # bucketSource: "https://s3..amazonaws.com/" + bucketSource: PROVIDE_BUCKET_SOURCE #validated-patterns-md-xray + # Bucket base name used for xray images + bucketBaseName: "xray-source" ---- - -[,sh] ++ +[source,terminal] ---- - git add values-global.yaml - git commit values-global.yaml - git push origin my-branch +$ git add values-global.yaml +$ git commit values-global.yaml +$ git push origin my-branch ---- -. You can deploy the pattern using the link:/infrastructure/using-validated-pattern-operator/[validated pattern operator]. If you do use the operator then skip to Validating the Environment below. -. Preview the changes that will be made to the Helm charts. +. To deploy the pattern, you can use the link:/infrastructure/using-validated-pattern-operator/[{validated-patterns-op}]. If you do use the Operator, skip to <>. + +. To preview the changes that will be implemented to the Helm charts, run the following command: + -[,sh] +[source,terminal] ---- -./pattern.sh make show +$ ./pattern.sh make show ---- -. Login to your cluster using oc login or exporting the KUBECONFIG +. Login to your cluster by running the following command: + -[,sh] +[source,terminal] ---- -oc login +$ oc login ---- + -.or set KUBECONFIG to the path to your `kubeconfig` file. For example +Optional: Set the `KUBECONFIG` variable for the `kubeconfig` file path: + -[,sh] +[source,terminal] ---- -export KUBECONFIG=~/my-ocp-env/auth/kubeconfig + export KUBECONFIG=~/ ---- -[id="check-the-values-files-before-deployment-getting-started"] +[id="check-the-values-files-before-deployment"] == Check the values files before deployment -You can run a check before deployment to make sure that you have the required variables to deploy the -Medical Diagnosis Validated Pattern. +To ensure that you have the required variables to deploy the {med-pattern}, run the `./pattern.sh make predeploy` command. You can review your values and make updates, if required. -You can run `make predeploy` to check your values. This will allow you to review your values and changed them in -the case there are typos or old values. 
The values files that should be reviewed prior to deploying the
-Medical Diagnosis Validated Pattern are:
+You must review the following values files before deploying the {med-pattern}:

|===
| Values File | Description

| values-secret.yaml
-| This is the values file that will include the xraylab section with all the database secrets
+| Values file that includes the secret parameters required by the pattern

| values-global.yaml
-| File that is used to contain all the global values used by Helm
+| File that contains all the global values used by Helm to deploy the pattern
|===

-Make sure you have the correct domain, clustername, externalUrl, targetBucket and bucketSource values.
-
-image::/videos/predeploy.svg[link="/videos/predeploy.svg"]
-
+[NOTE]
+====
+Before you run the `./pattern.sh make install` command, ensure that you have the correct values for:
+
+* domain
+* clusterName
+* cloudProvider
+* storageClassName
+* region
+* bucketSource
+====
+
+//image::/videos/predeploy.svg[link="/videos/predeploy.svg"]
+
+//Module to be included
+//:_content-type: PROCEDURE
+//:imagesdir: ../../../images
+[id="med-deploy-pattern_{context}"]
= Deploy

-. Apply the changes to your cluster
+. To apply the changes to your cluster, run the following command:
+
-[,sh]
+[source,terminal]
----
-./pattern.sh make install
+$ ./pattern.sh make install
----
+
-If the install fails and you go back over the instructions and see what was missed and change it, then run `make update` to continue the installation.
-
-. This takes some time. Especially for the OpenShift Data Foundation operator components to install and synchronize. The `make install` provides some progress updates during the install. It can take up to twenty minutes. Compare your `make install` run progress with the following video showing a successful install.
+If the installation fails, review the instructions, make any required updates, and then continue the installation by running the following command:
++
+[source,terminal]
+----
+$ ./pattern.sh make update
+----
++
+This step might take some time, especially for the {ocp-data-short} Operator components to install and synchronize. The `./pattern.sh make install` command provides some progress updates during the installation process. It can take up to twenty minutes. Compare your `./pattern.sh make install` run progress with the following video that shows a successful installation.
+
image::/videos/xray-deployment.svg[link="/videos/xray-deployment.svg"]

-. Check that the operators have been installed in the UI.
-.. To verify, in the OpenShift Container Platform web console, navigate to *Operators → Installed Operators* page.
 .. Check that the Operator is installed in the `openshift-operators` namespace and its status is `Succeeded`.
-+
-The main operator to watch is the OpenShift Data Foundation.
+. Verify that the Operators have been installed.
+.. To verify, in the {ocp} web console, navigate to the *Operators* → *Installed Operators* page.
+.. Check that the Operator is installed in the `openshift-operators` namespace and its status is `Succeeded`. Ensure that {ocp-data-short} appears in the list of installed Operators.
+
+//Module to be included
+//:_content-type: PROCEDURE
+//:imagesdir: ../../../images
[id="using-openshift-gitops-to-check-on-application-progress-getting-started"]
== Using OpenShift GitOps to check on Application progress

-You can also check on the progress using OpenShift GitOps to check on the various applications deployed.
+To check the various applications that are being deployed, you can view the progress of the {rh-gitops-short} Operator.

. Obtain the ArgoCD URLs and passwords.
+
-The URLs and login credentials for ArgoCD change depending on the pattern
-name and the site names they control. Follow the instructions below to find
-them, however you choose to deploy the pattern.
+The URLs and login credentials for ArgoCD change depending on the pattern name and the site names they control. Follow these instructions to find them, regardless of how you choose to deploy the pattern.
+
Display the fully qualified domain names, and matching login credentials, for all ArgoCD instances:
+
-[,sh]
+[source,terminal]
----
ARGO_CMD=`oc get secrets -A -o jsonpath='{range .items[*]}{"oc get -n "}{.metadata.namespace}{" routes; oc -n "}{.metadata.namespace}{" extract secrets/"}{.metadata.name}{" --to=-\\n"}{end}' | grep gitops-cluster`
CMD=`echo $ARGO_CMD | sed 's|- oc|-;oc|g'`
eval $CMD
----
+
-The result should look something like:
+.Example output
+
-[,text]
+[source,text]
----
NAME                 HOST/PORT                                                                          PATH   SERVICES            PORT    TERMINATION            WILDCARD
hub-gitops-server    hub-gitops-server-medical-diagnosis-hub.apps.wh-medctr.blueprints.rhecoeng.com            hub-gitops-server   https   passthrough/Redirect   None
@@ -245,22 +308,29 @@ openshift-gitops-server   openshift-gitops-server-openshift-gitops.apps.wh-medct
FdGgWHsBYkeqOczE3PuRpU1jLn7C2fD6
----
+
-The most important ArgoCD instance to examine at this point is `medical-diagnosis-hub`. This is where all the applications for the pattern can be tracked.
+[IMPORTANT]
+====
+Examine the `medical-diagnosis-hub` ArgoCD instance. You can track all the applications for the pattern in this instance.
+====

-. Check all applications are synchronised. There are thirteen different ArgoCD "applications" deployed as part of this pattern.
+. Check that all applications are synchronized. There are thirteen different ArgoCD `applications` that are deployed as part of this pattern.
+
+//Module to be included
+//:_content-type: PROCEDURE
+//:imagesdir: ../../../images
[id="viewing-the-grafana-based-dashboard-getting-started"]
== Viewing the Grafana based dashboard

-. First we need to accept SSL certificates on the browser for the dashboard. In the OpenShift console go to the Routes for project openshift-storage. Click on the URL for the s3-rgw.
+. Accept the SSL certificates on the browser for the dashboard. In the {ocp} web console, go to the Routes for project `openshift-storage`. Click the URL for the `s3-rgw`.
+
image::medical-edge/storage-route.png[link="/images/medical-edge/storage-route.png"]
+
-Make sure that you see some XML and not an access denied message.
+Ensure that you see some XML and not an access denied error message.
+
image::medical-edge/storage-rgw-route.png[link="/images/medical-edge/storage-rgw-route.png"]

-. While still looking at Routes, change the project to `xraylab-1`. Click on the URL for the `image-server`. Make sure you do not see an access denied message. You ought to see a `Hello World` message.
+. While still looking at Routes, change the project to `xraylab-1`. Click the URL for the `image-server`. Ensure that you do not see an access denied error message. You should see a `Hello World` message.
+
image::medical-edge/grafana-routes.png[link="/images/medical-edge/grafana-routes.png"]

@@ -268,16 +338,16 @@ image::medical-edge/grafana-routes.png[link="/images/medical-edge/grafana-routes
+
You can go to the command-line (make sure you have KUBECONFIG set, or are logged into the cluster.
+ -[,sh] +[source,terminal] ---- -oc scale deploymentconfig/image-generator --replicas=1 -n xraylab-1 +$ oc scale deploymentconfig/image-generator --replicas=1 -n xraylab-1 ---- + Or you can go to the OpenShift UI and change the view from Administrator to Developer and select Topology. From there select the `xraylab-1` project. + image::medical-edge/dev-topology.png[link="/images/medical-edge/dev-topology.png"] + -Right click on the `image-generator` pod icon and select `Edit Pod count`. +Right-click on the `image-generator` pod icon and select `Edit Pod count`. + image::medical-edge/dev-topology-menu.png[link="/images/medical-edge/dev-topology-menu.png"] + @@ -287,10 +357,15 @@ image::medical-edge/dev-topology-pod-count.png[link="/images/medical-edge/dev-to + Alternatively, you can have the same outcome on the Administrator console. + -Go to the OpenShift UI under Workloads, select Deploymentconfigs for Project xraylab-1. Click on `image-generator` and increase the pod count to 1. +Go to the OpenShift UI under Workloads, select Deploymentconfigs for Project `xraylab-1`. +Click `image-generator` and increase the pod count to 1. + image::medical-edge/start-image-flow.png[link="/images/medical-edge/start-image-flow.png"] + +//Module to be included +//:_content-type: PROCEDURE +//:imagesdir: ../../../images [id="making-some-changes-on-the-dashboard-getting-started"] == Making some changes on the dashboard @@ -298,30 +373,25 @@ You can change some of the parameters and watch how the changes effect the dashb . You can increase or decrease the number of image generators. + -[,sh] +[source,terminal] ---- -oc scale deploymentconfig/image-generator --replicas=2 +$ oc scale deploymentconfig/image-generator --replicas=2 ---- + Check the dashboard. + -[,sh] +[source,terminal] ---- -oc scale deploymentconfig/image-generator --replicas=0 +$ oc scale deploymentconfig/image-generator --replicas=0 ---- + Watch the dashboard stop processing images. . You can also simulate the change of the AI model version - as it's only an environment variable in the Serverless Service configuration. + -[,sh] +[source,terminal] ---- -oc patch service.serving.knative.dev/risk-assessment --type=json -p '[{"op":"replace","path":"/spec/template/metadata/annotations/revisionTimestamp","value":"'"$(date +%F_%T)"'"},{"op":"replace","path":"/spec/template/spec/containers/0/env/0/value","value":"v2"}]' +$ oc patch service.serving.knative.dev/risk-assessment --type=json -p '[{"op":"replace","path":"/spec/template/metadata/annotations/revisionTimestamp","value":"'"$(date +%F_%T)"'"},{"op":"replace","path":"/spec/template/spec/containers/0/env/0/value","value":"v2"}]' ---- + -This changes the model version value, as well as the revisionTimestamp in the annotations, which triggers a redeployment of the service. - -= Next Steps - -link:https://groups.google.com/g/hybrid-cloud-patterns[Help & Feedback] -link:https://github.com/hybrid-cloud-patterns/medical-diagnosis/issues[Report Bugs] +This changes the model version value, and the `revisionTimestamp` in the annotations, which triggers a redeployment of the service. 
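+
+To confirm that the patch triggered a new rollout, you can check that Knative Serving created a new revision and that it is ready. The following is a sketch; revision names are generated by Knative Serving and differ in your environment:
+
+[source,terminal]
+----
+$ oc get revisions.serving.knative.dev -n xraylab-1
+----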
diff --git a/content/patterns/medical-diagnosis/ideas-for-customization.adoc b/content/patterns/medical-diagnosis/ideas-for-customization.adoc index 091a48aad..5dea450e1 100644 --- a/content/patterns/medical-diagnosis/ideas-for-customization.adoc +++ b/content/patterns/medical-diagnosis/ideas-for-customization.adoc @@ -1,28 +1,32 @@ --- -title: Ideas for Customization +title: Ideas for customization weight: 50 aliases: /medical-diagnosis/ideas-for-customization/ --- :toc: :imagesdir: /images :_content-type: ASSEMBLY +include::modules/comm-attributes.adoc[] -= Why change it? +//Module to be included +//:_content-type: CONCEPT +//:imagesdir: ../../images -One of the major goals of the Red Hat patterns development process is to create modular, customizable demos. The medical diagnosis pattern is just an example of how AI/ML workloads built for object detection and classification can be run on top of OpenShift clusters. Consider your workloads for a moment - how would your workload best consume the pattern framework? Do your consumers require on-demand or near real time responses when using your application? Is your application processing images or data that is protected by either Government Privacy Laws or HIPAA? The medical diagnosis pattern has the ability to answer the call to either of these requirements via OpenShift Serverless and OpenShift Data Foundations. +[id="about-customizing-pattern-med"] += About customizing the pattern {med-pattern} -[id="what-are-some-different-ways-that-i-could-use-this-pattern?"] -== What are some different ways that I could use this pattern? +One of the major goals of the {solution-name-upstream} development process is to create modular and customizable demos. The {med-pattern} is just an example of how AI/ML workloads built for object detection and classification can be run on OpenShift clusters. Consider your workloads for a moment - how would your workload best consume the pattern framework? Do your consumers require on-demand or near real-time responses when using your application? Is your application processing images or data that is protected by either Government Privacy Laws or HIPAA? +The {med-pattern} can answer the call to either of these requirements by using {serverless-short} and {ocp-data-short}. -. The medical-diagnosis pattern is scanning X-Ray images to determine the probability that a patient may or may not have Pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that utilize object detection and classification. For example, the pattern could be used to scan C/T images for anomalies in the body such Sepsis, Cancer or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease as well as bowel disorders like Crohn's disease. -. The Transportation Security Agency (TSA) could use the medical-diagnosis pattern in a way that enhances their existing scanning capabilities to detect with a higher probability restricted items carried on a person or hidden away in a piece of luggage. With MLOps the model is constantly training and learning to better detect those items that are dangerous but aren't necessarily metallic such as a firearm or knife. The model is also training to dismiss those items that are authorized ultimately saving us from being stopped and searched at security checkpoints! -. Militaries could use images collected from drones, satellites or other platforms to identify objects and determine with probability what that object is. 
For example, the model could be trained to determine a type of ship, potentially its country of origin and other identifying characteristics. -. Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. An image of the item, including using different types of light, could be analyzed to help expose defects before packaging and distributing. The item could be routed to a defect area. +[id="understanding-different-ways-to-use-med-pattern"] +== Understanding different ways to use the {med-pattern} -[id="summary-ideas-for-customization"] -== Summary +. The {med-pattern} is scanning X-Ray images to determine the probability that a patient might or might not have Pneumonia. Continuing with the medical path, the pattern could be used for other early detection scenarios that use object detection and classification. For example, the pattern could be used to scan C/T images for anomalies in the body such as Sepsis, Cancer, or even benign tumors. Additionally, the pattern could be used for detecting blood clots, some heart disease, and bowel disorders like Crohn's disease. +. The Transportation Security Agency (TSA) could use the {med-pattern} in a way that enhances their existing scanning capabilities to detect with a higher probability restricted items carried on a person or hidden away in a piece of luggage. With Machine Learning Operations (MLOps), the model is constantly training and learning to better detect those items that are dangerous but which are not necessarily metallic, such as a firearm or a knife. The model is also training to dismiss those items that are authorized; ultimately saving passengers from being stopped and searched at security checkpoints. +. Militaries could use images collected from drones, satellites, or other platforms to identify objects and determine with probability what that object is. For example, the model could be trained to determine a type of ship, potentially its country of origin, and other such identifying characteristics. +. Manufacturing companies could use the pattern to inspect finished products as they roll off a production line. An image of the item, including using different types of light, could be analyzed to help expose defects before packaging and distributing. The item could be routed to a defect area. -These are just a few ideas to help get the creative juices flowing for how you could use the medical-diagnosis pattern as a framework for your application. +These are just a few ideas to help you understand how you could use the {med-pattern} as a framework for your application. -https://groups.google.com/g/hybrid-cloud-patterns[Help & Feedback] -https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/issues[Report Bugs] +//We have relevant links on the patterns page +//AI: Why does this point to AEG though? 
https://github.com/hybrid-cloud-patterns/ansible-edge-gitops/issues[Report Bugs] diff --git a/content/patterns/medical-diagnosis/troubleshooting.adoc b/content/patterns/medical-diagnosis/troubleshooting.adoc index 11a0e285b..a7b59e0c4 100644 --- a/content/patterns/medical-diagnosis/troubleshooting.adoc +++ b/content/patterns/medical-diagnosis/troubleshooting.adoc @@ -1,120 +1,114 @@ --- -title: Troubleshooting Guide +title: Troubleshooting weight: 40 aliases: /medical-diagnosis/troubleshooting/ --- :toc: :imagesdir: /images -:_content-type: ASSEMBLY +:_content-type: REFERENCE +include::modules/comm-attributes.adoc[] -[discrete] -[id="contents-troubleshooting"] -== Contents - -[id="understanding-the-makefile-troubleshooting"] +[id="med-understanding-the-makefile-troubleshooting"] === Understanding the Makefile The Makefile is the entrypoint for the pattern. We use the Makefile to bootstrap the pattern to the cluster. After the initial bootstrapping of the pattern, the Makefile isn't required for ongoing operations but can often be useful when needing to make a change to a config within the pattern by running a `make upgrade` which allows us to refresh the bootstrap resources without having to tear down the pattern or cluster. -[id="make-install--make-deploy-troubleshooting"] -==== make install / make deploy - -Executing `make install` within the pattern application will trigger a `make deploy` from `/common`. This initializes the *common* components of the pattern framework and will install a helm chart in the `default` namespace. At this point cluster services such as *Red Hat Advanced Cluster Management* and *OpenShift Gitops* are deployed. +[id="about-make-install-make-deploy-troubleshooting"] +==== About the make install and make deploy commands -Once *common* completes, the remaining tasks within the `make install` target will execute. +Running `make install` within the pattern application triggers a `make deploy` from `/common` directory. This initializes the `common` components of the pattern framework and install a helm chart in the `default` namespace. At this point, cluster services, such as {rh-rhacm-first} and {rh-gitops} are deployed. -[id="make-vault-init--make-load-secrets-troubleshooting"] -==== make vault-init / make load-secrets +After components from the `common` directory are installed, the remaining tasks within the `make install` target run. +//AI: Check which are these other tasks -This pattern is integrated with *HashiCorp Vault* and *External Secrets* services for secrets management within the cluster. These targets install vault from a Helm chart and the load the secret `(values-secret.yaml)` you created during link:../getting-started/#preparation[Getting Started]. +[id="make-vault-init-make-load-secrets-troubleshooting"] +==== About the make vault-init and make load-secrets commands -If *values-secret.yaml* does not exist, make will exit with an error saying so. Furthermore, if the *values-secret.yaml* file does exist but is improperly formatted, ansible will exit with an error about being improperly formatted. If you are not sure how format the secret, please refer to link:../getting-started/#preparation[Getting Started]. +The {med-pattern} is integrated with {hashicorp-vault} and {eso-op} services for secrets management within the cluster. These targets install vault from a {helm-chart} and load the secret `(values-secret.yaml)` that you created during link:../getting-started/#preparing-for-deployment[Getting Started]. 
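+
+If you need to rerun these targets, for example after updating your secrets file, a sketch of the invocation, assuming you run them through the same `pattern.sh` wrapper as the other targets in this pattern, is:
+
+[source,terminal]
+----
+$ ./pattern.sh make vault-init
+$ ./pattern.sh make load-secrets
+----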
-[id="make-bootstrap--make-upgrade-troubleshooting"] -==== make bootstrap / make upgrade +If `values-secret.yaml` does not exist, make will exit with an error saying so. Furthermore, if the `values-secret.yaml` file does exist but is improperly formatted, {rh-ansible} exits with an error about being improperly formatted. To verify the format of the secret, see link:../getting-started/#preparing-for-deployment[Getting Started]. -`make bootstrap` is the target used for deploying the application specific components of the pattern. It is the final step in the initial `make install` target. Running `make bootstrap` directly should typically not be necessary, instead you are encouraged to run `make upgrade`. +[id="make-bootstrap-make-upgrade-troubleshooting"] +==== About the make bootstrap and make upgrade commands +The `make bootstrap` command is the target used for deploying the application specific components of the pattern. It is the final step in the initial `make install` target. You might want to consider running the `make upgrade` command instead of the `make bootstrap` command directly. -Generally, executing `make upgrade` should only be required when something goes wrong with the application pattern deployment. For instance, if a value was missed, and the chart wasn't rendered correctly, executing `make upgrade` after fixing the value would be necessary. +Generally, running the `make upgrade` command is required when you encounter errors with the application pattern deployment. For instance, if a value was missed and the chart was not rendered correctly, executing `make upgrade` command after fixing the value would be necessary. -If you have any further questions, please, feel free to review the `Makefile` for the *common* and *Medical Diagnosis* components. It is located in `common/Makefile` and `./Makefile` respectively. +You might want to review the `Makefile` for the `common` and `Medical Diagnosis` components, which are located in `common/Makefile` and `./Makefile` respectively. [id="troubleshooting-the-pattern-deployment-troubleshooting"] === Troubleshooting the Pattern Deployment -Occasionally the pattern will encounter issues during the deployment. This can happen for any number of reasons, but most often it is because of either a change within the operator itself or something has changed/happened to the Operator Lifecycle Manager (OLM) which determines which operators are available in the operator catalog. Generally, when an issue occurs with the OLM, the operator is unavailable for installation. To ensure that the operator is in the catalog: +Occasionally the pattern will encounter issues during the deployment. This can happen for any number of reasons, but most often it is because of either a change within the operator itself or something has changed in the {olm-first} which determines which operators are available in the operator catalog. Generally, when an issue occurs with the {olm-short}, the operator is unavailable for installation. To ensure that the operator is in the catalog, run the following command: -[,sh] +[source,terminal] ---- - -oc get packagemanifests | grep +$ oc get packagemanifests | grep ---- When an issue occurs with the operator itself you can verify the status of the `subscription` and make sure that there are no warnings.An additional option is to log into the OpenShift Console, click on Operators, and check the status of the operator. Other issues encounter could be with a specific application within the pattern misbehaving. 
Most of the pattern is deployed into the `xraylab-1` namespace. Other components like ODF are deployed into `openshift-storage` and the OpenShift Serverless Operators are deployed into `knative-serving, knative-eventing` namespaces. -____ -*Use the grafana dashboard to assist with debugging and identifying the issue* -____ +[NOTE] +==== +Use the grafana dashboard to assist with debugging and identifying the issue +==== ''' +Problem:: No information is being processed in the dashboard -*Problem*: No information is being processed in the dashboard - -*Solution*: Most often this is due to the image-generator deploymentConfig needing to be scaled up. The image-generator by design is *scaled to 0*; - -[,sh] +Solution:: Most often this is due to the image-generator deploymentConfig needing to be scaled up. The image-generator by design is *scaled to 0*; ++ +[source,terminal] ---- -oc scale -n xraylab-1 dc/image-generator --replicas=1 +$ oc scale -n xraylab-1 dc/image-generator --replicas=1 ---- ++ +Alternatively, complete the following steps: -Or open the openshift-console, click on workloads, then click deploymentConfigs, click image-generator, and scale the pod to 1 or more. +. Navigate to the {rh-ocp} web console, and select *Workloads → DeploymentConfigs* +. Select `image-generator` and scale the pod to 1 or more. +//AI: Needs review ''' +Problem:: When browsing to the *xraylab* grafana dashboard and there are no images in the right-pane, only a security warning. -*Problem*: When browsing to the *xraylab* grafana dashboard and there are no images in the right-pane, only a security warning. - -*Solution*: The certificates for the openshift cluster are untrusted by your system. The easiest way to solve this is to open a browser and go to the s3-rgw route (oc get route -n openshift-storage), then acknowledge and accept the security warning. +Solution:: The certificates for the openshift cluster are untrusted by your system. The easiest way to solve this is to open a browser and go to the s3-rgw route (oc get route -n openshift-storage), then acknowledge and accept the security warning. ''' +Problem:: In the dashboard interface, no metrics data is available. -*Problem*: In the dashboard interface, no metrics data is available. - -*Solution*: There is likely something wrong with the Prometheus DataSource for the grafana dashboard. You can check the status of the datasource by executing the following: - -[,sh] +Solution:: There is likely something wrong with the Prometheus Data Source for the grafana dashboard. You can check the status of the data source by executing the following: ++ +[source,terminal] ---- -oc get grafanadatasources -n xraylab-1 +$ oc get grafanadatasources -n xraylab-1 ---- - -Ensure that the prometheus datasource exists and that the status is available. This could potentially be the token from the service account (*grafana-serviceaccount*) that is provided to the datasource as a bearer token. ++ +Ensure that the Prometheus data source exists and that the status is available. This could potentially be the token from the service account, for example, grafana-serviceaccount, that is provided to the data source as a bearer token. ''' - -*Problem*: The dashboard is showing red in the corners of the dashboard panes. - +Problem:: The dashboard is showing red in the corners of the dashboard panes. ++ image::medical-edge/medDiag-noDB.png[link="/images/medical-edge/medDiag-noDB.png"] -*Solution*: This is most likely due to the *xraylab* database not being available or misconfigured. 
Please check the database and ensure that it is functioning properly.
-
-*Step 1*: Ensure that the database is populated with the correct tables:
+Solution:: This is most likely because the *xraylab* database is unavailable or misconfigured. Check the database and ensure that it is functioning properly.
+. Ensure that the database is populated with the correct tables:
++
[source,terminal]
----
-
-oc exec -it xraylabdb-1- bash
-
-mysql -u root
+$ oc exec -it xraylabdb-1- bash
+$ mysql -u root

USE xraylabdb;
SHOW tables;
----
-
-The expected output is:
-
++
+.Example output
[source,terminal]
----
@@ -138,49 +132,64 @@ MariaDB [xraylabdb]> show tables;
+---------------------+
3 rows in set (0.000 sec)
----
-
-*Step 2:* Verify the password set in the `values-secret.yaml` is working
-
++
+. Verify that the password set in the `values-secret.yaml` file is working:
++
[source,terminal]
----
-oc exec -it xraylabdb-1- bash
-
-mysql -u xraylab -D xraylabdb -h xraylabdb -p
-
+$ oc exec -it xraylabdb-1- bash
+$ mysql -u xraylab -D xraylabdb -h xraylabdb -p
+
----
-
-If you are able to successfully login then your password has been configured correctly in vault,
-the external secrets operator and mounted to the database correctly.
++
+If you can log in successfully, your password has been configured correctly in Vault and the External Secrets Operator, and is mounted to the database correctly.

'''
+Problem:: The image-generator is scaled correctly, but the dashboard is not updating.

-*Problem*: The image-generator is scaled correctly, but nothing is happening in the dashboard.
-
-*Solution*: This could be that the serverless eventing function isn't picking up the notifications from ODF and therefore, not triggering the knative-serving function to scale up. In this situation there are a number of things to check, the first thing is to check the logs of the `rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-` pod in the `openshift-storage` namespace.
-
-[,sh]
+Solution:: The serverless eventing function might not be receiving the notifications from ODF and is therefore not triggering the knative-serving function to scale up. Start by checking the logs of the `rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-` pod in the `openshift-storage` namespace:
++
+[source,terminal]
----
-
-oc logs -n openshift-storage -f -c rgw
+$ oc logs -n openshift-storage -f -c rgw <pod-name>
----
-
-*You should see the `PUT` statement with a status code of `200`*
-
-Next ensure that the `kafkasource` and `kafkservice` and `kafka topic` resources have been created:
-
-[,sh]
++
+You should see the `PUT` statement with a status code of `200`.
++
+Ensure that the `kafkasource`, `kservice`, and `kafkatopic` resources are created:
++
+[source,terminal]
+----
+$ oc get -n xraylab-1 kafkasource
+----
++
+.Example output
+[source,terminal]
----
-
-oc get -n xraylab-1 kafkasource
-
NAME          TOPICS            BOOTSTRAPSERVERS                                      READY   REASON   AGE
xray-images   ["xray-images"]   ["xray-cluster-kafka-bootstrap.xraylab-1.svc:9092"]   True             23m
-
-oc get -n xraylab-1 kservice
+----
++
+[source,terminal]
+----
+$ oc get -n xraylab-1 kservice
+----
++
+.Example output
+[source,terminal]
+----
NAME              URL                                            LATESTCREATED           LATESTREADY             READY   REASON
risk-assessment   https://risk-assessment-xraylab-1.apps.
risk-assessment-00001 risk-assessment-00001 True - -oc get -n xraylab-1 kafkatopics +---- ++ +[source,terminal] +---- +$ oc get -n xraylab-1 kafkatopics +---- ++ +.Example output +[source,terminal] +---- NAME CLUSTER PARTITIONS REPLICATION FACTOR READY consumer-offsets---84e7a678d08f4bd226872e5cdd4eb527fadc1c6a xray-cluster 50 1 True strimzi-store-topic---effb8e3e057afce1ecf67c3f5d8e4e3ff177fc55 xray-cluster 1 3 True diff --git a/modules/comm-attributes.adoc b/modules/comm-attributes.adoc index 1e114fbe7..17063577f 100644 --- a/modules/comm-attributes.adoc +++ b/modules/comm-attributes.adoc @@ -5,6 +5,7 @@ //:toc-title: //:imagesdir: images //:prewrap!: +:redhat: Red{nbsp}Hat //Solution name //:rh-solution-name: Validated Patterns :solution-name-upstream: Validated Patterns @@ -20,12 +21,18 @@ :med: medical diagnosis :multi-devsec-pattern: Multi-cluster DevSecOps pattern :multi-devsec: multi-cluster DevSecOps +// Associated products +:hashicorp-vault: HashiCorp Vault +:hashicorp-vault-short: Vault +:helm-chart: Helm chart //Operators :validated-patterns-op: Validated Patterns Operator :grafana-op: Grafana Operator :eso-op: External Secrets Operator +:olm-first: Operator Lifecycle Manager (OLM) +:olm-short: OLM //OpenShift -:rh-ocp: Red Hat OpenShift Container Platform +:rh-ocp: Red{nbsp}Hat OpenShift Container Platform :ocp: OpenShift Container Platform :ocp-version: 4.12 :ocp-registry: OpenShift image registry @@ -35,85 +42,86 @@ :3no: three-node OpenShift :3no-first: Three-node OpenShift //OpenShift Platform Plus -:rh-opp: Red Hat OpenShift Platform Plus +:rh-opp: Red{nbsp}Hat OpenShift Platform Plus :opp: OpenShift Platform Plus //openshift virtualization (cnv) :VirtProductName: OpenShift Virtualization //Ansible: -:rh-ansible: Red Hat Ansible Automation Platform +:rh-ansible: Red{nbsp}Hat Ansible Automation Platform :ansible: Ansible Automation Platform //GitOps -:rh-gitops: Red Hat OpenShift GitOps +:rh-gitops: Red{nbsp}Hat OpenShift GitOps :rh-gitops-short: OpenShift GitOps //CoreOS -:op-system-first: Red Hat Enterprise Linux CoreOS (RHCOS) +:op-system-first: Red{nbsp}Hat Enterprise Linux CoreOS (RHCOS) :op-system-short: RHCOS :op-system-lowercase: rhcos //RHEL -:rhel-first: Red Hat Enterprise Linux (RHEL) +:rhel-first: Red{nbsp}Hat Enterprise Linux (RHEL) :rhel-short: RHEL //:op-system-version: 8.x //Console icons -:rh-app-icon: image:red-hat-applications-menu-icon.jpg[title="Red Hat applications"] +:rh-app-icon: image:red-hat-applications-menu-icon.jpg[title="Red{nbsp}Hat applications"] :kebab: image:kebab.png[title="Options menu"] -:rh-openstack-first: Red Hat OpenStack Platform (RHOSP) +:rh-openstack-first: Red{nbsp}Hat OpenStack Platform (RHOSP) :openstack-short: RHOSP //Assisted Installer :ai-full: Assisted Installer :ai-version: 2.3 //Cluster Manager -:cluster-manager-first: Red Hat OpenShift Cluster Manager +:cluster-manager-first: Red{nbsp}Hat OpenShift Cluster Manager :cluster-manager: OpenShift Cluster Manager :cluster-manager-url: link:https://console.redhat.com/openshift[OpenShift Cluster Manager Hybrid Cloud Console] :cluster-manager-url-pull: link:https://console.redhat.com/openshift/install/pull-secret[pull secret from the Red Hat OpenShift Cluster Manager] :insights-advisor-url: link:https://console.redhat.com/openshift/insights/advisor/[Insights Advisor] -:hybrid-console-first: Red Hat Hybrid Cloud Console +:hybrid-console-first: Red{nbsp}Hat Hybrid Cloud Console :hybrid-console: Hybrid Cloud Console :hybrid-console-url: https://console.redhat.com/ 
:hybrid-console-ocp-url: https://console.redhat.com/openshift //ACM -:rh-rhacm-first: Red Hat Advanced Cluster Management (RHACM) +:rh-rhacm-first: Red{nbsp}Hat Advanced Cluster Management (RHACM) :rh-rhacm: RHACM //:rh-rhacm-version: 2.5 //ACS -:rh-acs-first: Red Hat Advanced Cluster Security for Kubernetes +:rh-acs-first: Red{nbsp}Hat Advanced Cluster Security for Kubernetes -:cert-manager-operator: cert-manager Operator for Red Hat OpenShift -:secondary-scheduler-operator-full: Secondary Scheduler Operator for Red Hat OpenShift +:cert-manager-operator: cert-manager Operator for Red{nbsp}Hat OpenShift +:secondary-scheduler-operator-full: Secondary Scheduler Operator for Red{nbsp}Hat OpenShift :secondary-scheduler-operator: Secondary Scheduler Operator -:rh-virtualization-first: Red Hat Virtualization (RHV) +:rh-virtualization-first: Red{nbsp}Hat Virtualization (RHV) :rh-virtualization: RHV :rh-virtualization-engine-name: Manager //gitops -:gitops-title: Red Hat OpenShift GitOps +:gitops-title: Red{nbsp}Hat OpenShift GitOps :gitops-shortname: GitOps :gitops-ver: 1.1 //pipelines -:rh-pipelines-first: Red Hat OpenShift Pipelines +:rh-pipelines-first: Red{nbsp}Hat OpenShift Pipelines :pipelines-short: OpenShift Pipelines :pipelines-ver: pipelines-1.9 :tekton-chains: Tekton Chains :tekton-hub: Tekton Hub :pac: Pipelines as Code //AMQ -:rh-amq-first: Red Hat AMQ +:rh-amq-first: Red{nbsp}Hat AMQ +:rh-amq-streams: Red{nbsp}Hat AMQ Streams :rh-amq-short: RHAMQ //Runtimes -:rh-runtime: Red Hat Runtimes +:rh-runtime: Red{nbsp}Hat Runtimes //Data foundation -:rh-ocp-data-first: Red Hat OpenShift Data Foundation -:rh-ocp-data-short: OpenShift Data Foundation +:rh-ocp-data-first: Red{nbsp}Hat OpenShift Data Foundation +:ocp-data-short: OpenShift Data Foundation //Serverless -:rh-serverless-first: Red Hat OpenShift Serverless +:rh-serverless-first: Red{nbsp}Hat OpenShift Serverless :serverless-short: OpenShift Serverless :ServerlessOperatorName: OpenShift Serverless Operator :FunctionsProductName: OpenShift Serverless Functions //Quay -:rh-quay: Red Hat Quay +:rh-quay: Red{nbsp}Hat Quay :quay-short: Quay // Red Hat Quay Container Security Operator -:rhq-cso: Red Hat Quay Container Security Operator +:rhq-cso: Red{nbsp}Hat Quay Container Security Operator //Cloud platforms :AWS: Amazon Web Services (AWS) :GCP: Google Cloud Platform (GCP) -:Azure: Microsoft Azure \ No newline at end of file +:Azure: Microsoft Azure diff --git a/modules/mcg-about-cluster-sizing.adoc b/modules/mcg-about-cluster-sizing.adoc index ca6e194fa..4a917879c 100644 --- a/modules/mcg-about-cluster-sizing.adoc +++ b/modules/mcg-about-cluster-sizing.adoc @@ -5,6 +5,8 @@ [id="about-openshift-cluster-sizing-mcg"] = About OpenShift cluster sizing for the Multicloud GitOps Pattern +For information on minimum hardware requirements for an OpenShift cluster, refer to the {ocp} documentation for link:https://docs.openshift.com/container-platform/3.11/install/prerequisites.html#hardware[System and environment requirements]. 
+
To understand cluster sizing requirements for the Multicloud GitOps pattern, consider the following components that the Multicloud GitOps pattern deploys on the datacenter or the hub OpenShift cluster:
|===
@@ -26,11 +28,4 @@ To understand cluster sizing requirements for the Multicloud GitOps pattern, con
| OpenShift GitOps
|===
-The following are the minimum requirements for sizing of nodes for OpenShift Container Platform 4.x:
-
-* Minimum 4 vCPU** (additional are strongly recommended)
-* Minimum 16 GB RAM** (additional memory is strongly recommended, especially if `etcd` is colocated on Control Planes)
-* Minimum 40 GB hard disk space for the file system containing `/var/`
-* Minimum 1 GB hard disk space for the file system containing `/usr/local/bin/`
-
The Multicloud GitOps pattern also includes the Red Hat Advanced Cluster Management (RHACM) supporting operator that is installed by OpenShift GitOps using Argo CD.
\ No newline at end of file
diff --git a/modules/mcg-about-customizing-pattern.adoc b/modules/mcg-about-customizing-pattern.adoc
index 520b0bb12..f145e7f54 100644
--- a/modules/mcg-about-customizing-pattern.adoc
+++ b/modules/mcg-about-customizing-pattern.adoc
@@ -1,8 +1,8 @@
:_content-type: CONCEPT
:imagesdir: ../../images
-[id="about-customizing-pattern"]
-= About customizing a pattern
+[id="about-customizing-pattern-mcg"]
+= About customizing the {mcg-pattern}
One of the major goals of the Validated Patterns development process is to create modular, customizable demos. The Multicloud GitOps is just an example of a pattern managing multiple clusters in a GitOps fashion. It contains a very simple `config-demo` application, which prints out a secret that was injected into the vault through an out-of-band mechanism.
diff --git a/modules/mcg-deploying-mcg-pattern.adoc b/modules/mcg-deploying-mcg-pattern.adoc
index 7d303e024..e6c8ee8c7 100644
--- a/modules/mcg-deploying-mcg-pattern.adoc
+++ b/modules/mcg-deploying-mcg-pattern.adoc
@@ -4,7 +4,6 @@
[id="deploying-mcg-pattern"]
= Deploying the Multicloud GitOps pattern
-//Note that Block titles like these don't render correctly on the site
.Prerequisites
* An OpenShift cluster
** Select *OpenShift \-> Clusters \-> Create cluster*.
** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. See link:../../multicloud-gitops/mcg-cluster-sizing[sizing your cluster].
* Optional: A second OpenShift cluster for multicloud demonstration.
-* The git binary. See https://git-scm.com/book/en/v2/Getting-Started-Installing-Git[Installing Git]
-* The Podman tool. See https://podman.io/getting-started/installation[Installing Podman]
+//Replaced git and podman prereqs with the tooling dependencies page
+* https://hybrid-cloud-patterns.io/learn/quickstart/[Install the tooling dependencies].
The use of this pattern depends on having at least one running Red Hat OpenShift cluster. However, consider creating a cluster for deploying the GitOps management hub assets and a separate cluster for the managed cluster.
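+
+For example, before you deploy, you can confirm that a cluster satisfies the dynamic `StorageClass` prerequisite listed above. This is a minimal check; the storage class names and the default class depend on your cluster and provider:
+
+[source,terminal]
+----
+$ oc get storageclass
+----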