2 changes: 1 addition & 1 deletion README.md
@@ -1,4 +1,4 @@
# Hybrid Cloud Patterns documentation site
# Validated Patterns documentation site

This project contains the new proof-of-concept documentation site for validatedpatterns.io

2 changes: 1 addition & 1 deletion content/blog/2021-12-31-medical-diagnosis.md
@@ -14,7 +14,7 @@ aliases: /2021/12/31/medical-diagnosis/

Our team recently completed the development of a validated pattern that showcases the capabilities we have at our fingertips when we combine OpenShift and other cutting-edge Red Hat technologies to deliver a solution.

We've taken an application defined imperatively in an Ansible playbook and converted it into GitOps-style declarative Kubernetes resources. Using the validated pattern framework we are able to deploy, manage, and integrate with multiple cutting-edge Red Hat technologies, and provide a capability that the initial deployment strategy didn't have available to it: a lifecycle. Everything you need to take this pattern for a spin is in [git](https://github.com/hybrid-cloud-patterns/medical-diagnosis).
We've taken an application defined imperatively in an Ansible playbook and converted it into GitOps-style declarative Kubernetes resources. Using the validated pattern framework we are able to deploy, manage, and integrate with multiple cutting-edge Red Hat technologies, and provide a capability that the initial deployment strategy didn't have available to it: a lifecycle. Everything you need to take this pattern for a spin is in [git](https://github.com/validatedpatterns/medical-diagnosis).

## Pattern Workflow

4 changes: 2 additions & 2 deletions content/blog/2022-03-30-multicloud-gitops.md
@@ -12,9 +12,9 @@ aliases: /2022/03/30/multicloud-gitops/

# Validated Pattern: Multi-Cloud GitOps

## Hybrid Cloud Patterns: The Story so far
## Validated Patterns: The Story so far

Our first foray into the realm of Hybrid Cloud Patterns was the adaptation of the MANUela application and its associated tooling to ArgoCD and Tekton, to demonstrate the deployment of a fairly involved IoT application designed to monitor industrial equipment and use AI/ML techniques to predict failure. This resulted in the Industrial Edge validated pattern, which you can see [here](https://github.com/hybrid-cloud-patterns/industrial-edge).
Our first foray into the realm of Validated Patterns was the adaptation of the MANUela application and its associated tooling to ArgoCD and Tekton, to demonstrate the deployment of a fairly involved IoT application designed to monitor industrial equipment and use AI/ML techniques to predict failure. This resulted in the Industrial Edge validated pattern, which you can see [here](https://github.com/validatedpatterns/industrial-edge).

This was our first use of a framework to deploy a significant application, and we learned a lot by doing it. It was good to be faced with a number of problems in the “real world” before taking a look at what is really essential for the framework and why.

8 changes: 4 additions & 4 deletions content/blog/2022-09-02-route.md
@@ -26,7 +26,7 @@ kind: Route
metadata:
  name: hello-openshift
spec:
  host: hello-openshift-hello-openshift.<Ingress_Domain>
  port:
    targetPort: 8080
  to:
@@ -75,7 +75,7 @@ metadata:
  name: hello-openshift
  namespace: hello-openshift
spec:
  subdomain: hello-openshift-hello-openshift
  port:
    targetPort: 8080
  to:
@@ -101,7 +101,7 @@ Now using project "hello-openshift" on server "https://api.magic-mirror-2.bluepr
Last but not least, let's apply the example route definition we just created.

```console
$ oc create -f /tmp/route-example.yaml
route.route.openshift.io/hello-openshift created
```

@@ -148,4 +148,4 @@ As you can see the *subdomain* property was replaced with the *host* property bu

Using the *subdomain* property when defining a route is very useful if you are deploying your application to different clusters: it means you do not have to hard-code the ingress domain for every cluster.
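
As a quick illustration - a sketch only, with made-up ingress domains - the same manifest yields a different fully qualified host on each cluster it is applied to:

```yaml
# One Route manifest, two clusters. The ingress domains below are
# hypothetical and only illustrate how the router completes the host.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello-openshift
  namespace: hello-openshift
spec:
  subdomain: hello-openshift-hello-openshift
  port:
    targetPort: 8080
  to:
    kind: Service
    name: hello-openshift
# On a cluster with ingress domain apps.cluster-a.example.com the route becomes:
#   hello-openshift-hello-openshift.apps.cluster-a.example.com
# On apps.cluster-b.example.com the identical manifest becomes:
#   hello-openshift-hello-openshift.apps.cluster-b.example.com
```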

If you have any questions or want to see what we are working on, please feel free to visit our [Hybrid Cloud Patterns](https://validatedpatterns.io/) site. If you are excited or intrigued by what you see here, we’d love to hear your thoughts and ideas! Try the patterns contained in our [Hybrid Cloud Patterns Repo](https://github.com/hybrid-cloud-patterns). We will review your pull requests to our pattern repositories.
If you have any questions or want to see what we are working on, please feel free to visit our [Validated Patterns](https://validatedpatterns.io/) site. If you are excited or intrigued by what you see here, we’d love to hear your thoughts and ideas! Try the patterns contained in our [Validated Patterns Repo](https://github.com/validatedpatterns). We will review your pull requests to our pattern repositories.
4 changes: 2 additions & 2 deletions content/blog/2022-10-12-acm-provisioning.md
@@ -41,7 +41,7 @@ the pay-as-you-go OpenShift managed service.

Start by [deploying](https://validatedpatterns.io/multicloud-gitops/getting-started/) the Multi-cloud GitOps pattern on AWS.

Next, you'll need to create a fork of the [multicloud-gitops](https://github.com/hybrid-cloud-patterns/multicloud-gitops/)
Next, you'll need to create a fork of the [multicloud-gitops](https://github.com/validatedpatterns/multicloud-gitops/)
repo. Go there in a browser, make sure you’re logged in to GitHub, click the
“Fork” button, and confirm the destination by clicking the big green "Create
fork" button.
@@ -56,7 +56,7 @@ And finally, click through to the installed operator, and select the `Create
instance` button and fill out the Create a Pattern form. Most of the defaults
are fine, but make sure you update the GitSpec URL to point to your fork of
`multicloud-gitops`, rather than
`https://github.com/hybrid-cloud-patterns/multicloud-gitops`.
`https://github.com/validatedpatterns/multicloud-gitops`.
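
For reference, here is a hedged sketch of the kind of Pattern resource that form produces. The field names are assumptions based on the form fields and may differ between operator versions, and `<your-username>` is a placeholder, so treat this as illustrative rather than copy-paste ready:

```yaml
apiVersion: gitops.hybrid-cloud-patterns.io/v1alpha1  # assumed API group/version
kind: Pattern
metadata:
  name: multicloud-gitops
  namespace: openshift-operators
spec:
  clusterGroupName: hub
  gitSpec:
    # Point at your fork rather than the upstream repository
    targetRepo: https://github.com/<your-username>/multicloud-gitops
    targetRevision: main
```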

### Providing your Cloud Credentials

2 changes: 1 addition & 1 deletion content/contribute/contribute-to-docs.adoc
@@ -7,7 +7,7 @@ weight: 10

:toc:

//Use the Contributor's guide to learn about ways to contribute to the Hybrid Cloud Patterns, to understand the prerequisites and toolchain required for contribution, and to follow some basic documentation style and structure guidelines.
//Use the Contributor's guide to learn about ways to contribute to the Validated Patterns, to understand the prerequisites and toolchain required for contribution, and to follow some basic documentation style and structure guidelines.

include::modules/contributing.adoc[leveloffset=+1]
include::modules/tools-and-setup.adoc[leveloffset=+1]
24 changes: 12 additions & 12 deletions content/contribute/extending-a-pattern.md
@@ -8,25 +8,25 @@ aliases: /extending-a-pattern/
# Extending an existing pattern

## Introduction to extending a pattern using a fork
Extending an existing pattern refers to adding a new product and/or configuration to an existing pattern. For example, a pattern might be a great fit for a solution but require the addition of an observability tool, e.g. Prometheus, Grafana, or Elastic. Extending an existing pattern is not very difficult, and the advantage is that it automates the integration of the extra product into the pattern.

Extending usually requires four steps:
1. Adding any required namespace for the product
1. Adding a subscription to install an operator
1. Adding one or more ArgoCD applications to manage the post-install configuration of the product
1. Adding the Helm chart needed to implement the post-install configuration identified in step 3.

Sometimes there is no operator in [OperatorHub](https://catalog.redhat.com/software/search?deployed_as=Operator) for the product, and it must instead be installed using a Helm chart.

These additions need to be made to the appropriate `values-<cluster group>.yaml` file in the top-level pattern directory. If the component is on a hub cluster, the file is `values-hub.yaml`; if it's on a production cluster, it is `values-production.yaml`. Look at the pattern architecture and decide where you need to add the product.

In the example below AMQ Streams (Kafka) is chosen as a product to add to a pattern.
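
For orientation, here is an illustrative sketch of where each of the four additions lands in `values-hub.yaml`. The fragments are developed step by step in the sections below, and your pattern's file may nest these keys differently:

```yaml
# values-hub.yaml -- illustrative overview only; see the sections below
namespaces:
  - my-kafka                 # step 1: namespace
subscriptions:
  amq-streams:               # step 2: operator subscription
    name: amq-streams
    namespace: my-kafka
applications:
  kafka:                     # step 3: ArgoCD application
    name: kafka
    namespace: my-kafka
    project: my-app
    path: charts/all/kafka   # step 4: where the Helm chart lives
```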

## Before starting, fork and clone first

1. Visit the GitHub page for the pattern that you wish to extend, e.g. [multicloud-gitops](https://github.com/hybrid-cloud-patterns/multicloud-gitops). Select “Fork” in the top right corner.
1. Visit the GitHub page for the pattern that you wish to extend, e.g. [multicloud-gitops](https://github.com/validatedpatterns/multicloud-gitops). Select “Fork” in the top right corner.

1. On the “Create a new fork” page, you can choose the owner repository you want and the name of the fork. Most times you will fork into your personal repo and leave the name the same. When you have made the appropriate choices, press the "Create fork" button.

1. You will need to clone the new fork onto your laptop/desktop so that you can do the extension work effectively. On the new fork’s main page, select the green “Code” button and copy the git repo’s SSH address.

@@ -39,7 +39,7 @@ In the example below AMQ Streams (Kafka) is chosen as a product to add to a patt
```

## Adding a namespace
The first step is to add a namespace in the `values-<cluster group>.yaml`. Sometimes a specific namespace is expected in other parts of a product's configuration; e.g., Red Hat Advanced Cluster Security expects to use the namespace `stackrox`. While you might try using a different namespace, you may encounter issues.

In our example we are just going to add the namespace `my-kafka`.

@@ -48,7 +48,7 @@ In our example we are just going to add the namespace `my-kafka`.
namespaces:
  ... # other namespaces above my-kafka
  - my-kafka
```

## Adding a subscription
The next step is to add the subscription information for the Kubernetes Operator. Sometimes this subscription needs to be added to a specific namespace, e.g. `openshift-operators`. Check for any operator namespace requirements. In this example just place it in the newly created `my-kafka` namespace.
@@ -60,11 +60,11 @@ subscriptions:
  amq-streams:
    name: amq-streams
    namespace: my-kafka
```

## Adding the ArgoCD application
The next step is to add the application information. Sometimes you want to group applications in ArgoCD into a project; you can do this by using an existing project grouping or creating a new one. The example below uses an existing `project` called `my-app`.

```yaml
---
applications:
@@ -73,10 +73,10 @@ applications:
    namespace: my-kafka
    project: my-app
    path: charts/all/kafka
```

## Adding the Helm Chart
The `path:` tag in the above kafka application tells ArgoCD where to find the Helm chart needed to deploy this application. Paths are relative to the top-level pattern directory; in this example that is `~/git/multicloud-gitops`.

ArgoCD continuously monitors the artifacts in that location and applies any updates. Each site type has its own `values-` file listing its subscriptions and applications.
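
As a rough sketch (the file names here are hypothetical; any standard Helm chart layout works), the directory referenced by `path: charts/all/kafka` would contain something like:

```yaml
# charts/all/kafka/Chart.yaml -- minimal Helm chart scaffold (illustrative)
apiVersion: v2
name: kafka
version: 0.1.0
# Alongside Chart.yaml you would typically keep:
#   values.yaml              -- default values for the chart
#   templates/kafka.yaml     -- the manifests to apply, e.g. the Kafka
#                               custom resource shown below
```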

@@ -149,7 +149,7 @@ metadata:
  # annotations:
  #   argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true
  #
  # NOTE if needed you can use argocd sync-wave to delay a manifest
  #   argocd.argoproj.io/sync-wave: "3"
spec:
  entityOperator:
Expand Down
14 changes: 7 additions & 7 deletions content/learn/faq.adoc
@@ -10,20 +10,20 @@ aliases: /faq/
= FAQ

[id="what-is-a-hybrid-cloud-pattern"]
== What is a Hybrid Cloud Pattern?
== What is a Validated Pattern?

Hybrid Cloud Patterns are collections of applications (in the ArgoCD sense) that demonstrate aspects of hub/edge computing that seem interesting and useful. Hybrid Cloud Patterns will generally have a hub or centralized component, and an edge component. These will interact in different ways.
Validated Patterns are collections of applications (in the ArgoCD sense) that demonstrate aspects of hub/edge computing that seem interesting and useful. Validated Patterns will generally have a hub or centralized component, and an edge component. These will interact in different ways.

Many things have changed in the IT landscape in the last few years - containers and Kubernetes have taken the industry by storm, but they introduce many technologies and concepts. It is not always clear how these technologies and concepts play together - and Hybrid Cloud Patterns is our effort to show these technologies working together on non-trivial applications in ways that make sense for real customers and partners to use.
Many things have changed in the IT landscape in the last few years - containers and Kubernetes have taken the industry by storm, but they introduce many technologies and concepts. It is not always clear how these technologies and concepts play together - and Validated Patterns is our effort to show these technologies working together on non-trivial applications in ways that make sense for real customers and partners to use.

The first Hybrid Cloud Pattern is based on https://github.com/sa-mw-dach/manuela[MANUela], an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, and an S3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible only for gathering data from instrumented line devices and sharing it via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis.
The first Validated Pattern is based on https://github.com/sa-mw-dach/manuela[MANUela], an application developed by Red Hat field associates. This application highlights some interesting aspects of the industrial edge in a cloud-native world - the hub component features pipelines to build the application, a "twin" for testing purposes, a central data lake, and an S3 component to gather data from the edge installations (which are factories in this case). The edge component has machine sensors, which are responsible only for gathering data from instrumented line devices and sharing it via MQTT messaging. The edge also features Seldon, an AI/ML framework for making predictions, a custom Node.js application to show data in real time, and messaging components supporting both MQTT and Kafka protocols. The local applications use MQTT to retrieve data for display, and the Kafka components move the data to the central hub for storage and analysis.

We are actively developing new Hybrid Cloud Patterns. Watch this space for updates!
We are actively developing new Validated Patterns. Watch this space for updates!

[id="how-are-they-different-from-xyz"]
== How are they different from XYZ?

Many technology demos can be very minimal - such demos have an important place in the ecosystem to demonstrate the intent of an individual technology. Hybrid Cloud Patterns are meant to demonstrate groups of technologies working together in a cloud native way. And yet, we hope to make these patterns general enough to allow for swapping application components out -- for example, if you want to swap out ActiveMQ for RabbitMQ to support MQTT - or use a different messaging technology altogether, that should be possible. The other components will require reconfiguration.
Many technology demos can be very minimal - such demos have an important place in the ecosystem to demonstrate the intent of an individual technology. Validated Patterns are meant to demonstrate groups of technologies working together in a cloud native way. And yet, we hope to make these patterns general enough to allow for swapping application components out -- for example, if you want to swap out ActiveMQ for RabbitMQ to support MQTT - or use a different messaging technology altogether, that should be possible. The other components will require reconfiguration.

[id="what-technologies-are-used"]
== What technologies are used?
@@ -44,7 +44,7 @@ In the future, we expect to further use Red Hat OpenShift, and expand the integr
[id="how-are-they-structured"]
== How are they structured?

Hybrid Cloud Patterns come in parts - we have a https://github.com/hybrid-cloud-patterns/common[common] repository with logic that applies to multiple patterns. Layered on top of that is our first pattern - https://github.com/hybrid-cloud-patterns/industrial-edge[industrial edge]. This layout allows individual applications within a pattern to be swapped out: customize the values files in the root of the repository to point those components at different branches, forks, or even entirely different repositories. (At present, the repositories all have to be on github.com and accessible with the same token.)
Validated Patterns come in parts - we have a https://github.com/validatedpatterns/common[common] repository with logic that applies to multiple patterns. Layered on top of that is our first pattern - https://github.com/validatedpatterns/industrial-edge[industrial edge]. This layout allows individual applications within a pattern to be swapped out: customize the values files in the root of the repository to point those components at different branches, forks, or even entirely different repositories. (At present, the repositories all have to be on github.com and accessible with the same token.)
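
As a hypothetical example of that customization (the exact keys vary by pattern and framework version, so check the values files in your pattern's root before relying on these names):

```yaml
# values-hub.yaml -- hypothetical fragment; key names are illustrative
applications:
  my-app:
    name: my-app
    namespace: my-app
    project: my-project
    path: charts/all/my-app
    # Illustrative per-application override pointing at a fork and branch:
    repoURL: https://github.com/<your-fork>/my-component-repo
    targetRevision: my-feature-branch
```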

The common repository is primarily concerned with how to deploy the GitOps operator, and to create the namespaces that will be necessary to manage the pattern applications.
