Merged
2 changes: 1 addition & 1 deletion content/blog/2022-09-02-route.md
@@ -45,7 +45,7 @@ As you can see the spec describes the a **host:** or path to the route, the tar

If we focus on the **host:** value you see that we need to provide the Ingress_Domain to the host. You might ask yourself: *why is this a problem?*

If you manage just one cluster, and your application just runs on that cluster, you can just hard code the ingress domain and be on your merry way. But what happens when you are deploying this application to multiple clusters and their domains are different? Whoever is doing the Ops to deploy your application will have to change the Ingress_Domain to match the the cluster domain manually before deploying the application.
If you manage just one cluster, and your application just runs on that cluster, you can just hard code the ingress domain and be on your merry way. But what happens when you are deploying this application to multiple clusters and their domains are different? Whoever is doing the Ops to deploy your application will have to change the Ingress_Domain to match the cluster domain manually before deploying the application.

Let's go a step further and say you are using *GitOps*, and this definition lives in a *git* repository, what happens then? In our humble opinion it becomes a bit more complicated to make sure the ingress domain is set correctly.

2 changes: 1 addition & 1 deletion content/blog/2023-11-17-argo-configmanagement-plugins.md
@@ -133,7 +133,7 @@ cluster that will be running the demo can be discovered, so rather than requirin
mechanism that extracted that information and stored it as a Helm variable. Meanwhile, the components of industrial-edge
that used this information had very opinionated kustomize-based deployment mechanisms and workflows to update them.
We did not want to change this mechanism at the time, so it was better for us to work out how to apply Helm templating
on top of a set of of manifests that kustomize had already rendered. The CMP 1.0 framework was suitable for this, and
on top of a set of manifests that kustomize had already rendered. The CMP 1.0 framework was suitable for this, and
fairly straightforward to use, so we did. However, we did not, at that time, put any thought into parameterizing the
use of config management plugins; making too radical a change to how the repo server worked would have difficult, and
would have required injecting a new (and unsupported) image into a product; not something to be undertaken lightly.
2 changes: 1 addition & 1 deletion content/blog/2023-12-05-nutanix-testing.md
@@ -23,6 +23,6 @@ Pattern consumers can now rest assured that the core pattern functionality will

This would not be possible without the wonderful co-operation of Nutanix, who are doing all the work of deploying OpenShift and our pattern on their platform, executing the tests, and reporting the results.

To facilitate this, the patterns team have begun the process of open sourcing the downstream tests for all our patterns. Soon all tests will live alongside the the patterns they target, allowing them to be easily executed and/or improved by pattern consumers and platform owners.
To facilitate this, the patterns team have begun the process of open sourcing the downstream tests for all our patterns. Soon all tests will live alongside the patterns they target, allowing them to be easily executed and/or improved by pattern consumers and platform owners.

Our thanks once again to Nutanix.
4 changes: 2 additions & 2 deletions content/blog/2024-01-26-more-secrets-options.md
@@ -62,7 +62,7 @@ loaded by the appropriate backend code.
Users of the pattern framework will be able to change secrets backends as straightforwardly
as we can make possible. The only other change the user will need to make (to use another
ESO backend) is to use the backend's mechanism to refer to keys. (For example: in Vault,
keys have have names like `secret/data/global/config-demo`; in the Kubernetes backend
keys have names like `secret/data/global/config-demo`; in the Kubernetes backend
it would just be the secret object name that's being used to store the secret material,
such as `config-demo`).

@@ -297,7 +297,7 @@ and running them.

`k8s_secret_utils` is used for loading both the `kubernetes` and `none` backends. It

### Changes to to vault_utils Ansible Role
### Changes to vault_utils Ansible Role

Some code has been factored out of `vault_utils` and now lives in roles called `cluster_pre_check` and
`find_vp_secrets` roles. A new task file has been added, `push_parsed_secrets.yaml` that knows how to use
2 changes: 1 addition & 1 deletion content/blog/2024-07-12-in-cluster-git.md
@@ -64,7 +64,7 @@ There are fundamentally two ways to set up the in-cluster gitea server.

## Configuration

Once the the in-gitea cluster is enabled, its configuration will be done via a normal argo application
Once the in-cluster gitea is enabled, its configuration will be done via a normal argo application
that can be seen in the cluster-wide argo:
![gitea-argo-application](/images/gitea-argocd-application.png)

@@ -46,7 +46,7 @@ Here's an inventory of what gets deployed by the **Ansible Edge GitOps** pattern

The Ansible Edge GitOps pattern has been tested with a defined set of specifically tested configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture.

The Hub OpenShift Cluster is made up of the the following on the AWS deployment tested:
The Hub OpenShift Cluster is made up of the following on the AWS deployment tested:

| Node Type | Number of nodes | Cloud Provider | Instance Type
| :---- | :----: | :---- | :----
@@ -101,7 +101,7 @@ secrets:
chpasswd: { expire: False }
```

* A manifest file with an entitlement to run Ansible Automation Platform. This file (which will be a .zip file) will be posted to to Ansible Automation Platform instance to enable its use. Instructions for creating a manifest file can be found [here](https://www.redhat.com/en/blog/how-create-and-use-red-hat-satellite-manifest)
* A manifest file with an entitlement to run Ansible Automation Platform. This file (which will be a .zip file) will be posted to Ansible Automation Platform instance to enable its use. Instructions for creating a manifest file can be found [here](https://www.redhat.com/en/blog/how-create-and-use-red-hat-satellite-manifest)

```yaml
- name: aap-manifest
@@ -339,7 +339,7 @@ Click on the "three dots" menu on the right, which will open a dialog like the f

[![kubevirt411-vm-open-console](/images/ansible-edge-gitops/aeg-kubevirt411-con-ignition.png)](/images/ansible-edge-gitops/aeg-kubevirt411-con-ignition.png)

The virtual machine console view will either show a standard RHEL console login screen, or if the demo is working as designed, it will show the Ignition application running in kiosk mode. If the console shows a standard RHEL login, it can be accessed using the the initial user name (`cloud-user` by default) and password (which is what is specified in the Helm chart Values as either the password specific to that machine group, the default cloudInit, or a hardcoded default which can be seen in the template [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml). On a VM created through the wizard or via `oc process` from a template, the password will be set on the VirtualMachine object in the `volumes` section.
The virtual machine console view will either show a standard RHEL console login screen, or if the demo is working as designed, it will show the Ignition application running in kiosk mode. If the console shows a standard RHEL login, it can be accessed using the initial user name (`cloud-user` by default) and password (which is what is specified in the Helm chart Values as either the password specific to that machine group, the default cloudInit, or a hardcoded default which can be seen in the template [here](https://github.com/validatedpatterns/ansible-edge-gitops/blob/main/charts/hub/edge-gitops-vms/templates/virtual-machines.yaml). On a VM created through the wizard or via `oc process` from a template, the password will be set on the VirtualMachine object in the `volumes` section.

### Initial User login (cloud-user)

2 changes: 1 addition & 1 deletion content/patterns/devsecops/cluster-sizing.md
@@ -36,7 +36,7 @@ The hub can be modified to deploy OpenShift Pipelines if needed. See Development

The Secure Supply Chain pattern has been tested with a defined set of specifically tested configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture.

The Hub OpenShift Cluster is made up of the the following on the AWS deployment tested:
The Hub OpenShift Cluster is made up of the following on the AWS deployment tested:

| Node Type | Number of nodes | Cloud Provider | Instance Type
| :---- | :----: | :---- | :----
2 changes: 1 addition & 1 deletion content/patterns/devsecops/devel-cluster.md
@@ -66,4 +66,4 @@ There are a number of steps you can do to check that the components have deploye

## Next up

Deploy the the Multicluster DevSecOps [secured production cluster](/devsecops/production-cluster)
Deploy the Multicluster DevSecOps [secured production cluster](/devsecops/production-cluster)
2 changes: 1 addition & 1 deletion content/patterns/devsecops/ideas-for-customization.md
@@ -32,4 +32,4 @@ While this can be done with any of the patterns the Multicluster DevSecOps patte

1. `values-smart-signs.yaml`

GitOps and DevSecOps would be used to make sure that applications would be deployed on the correct clusters. Some of the "clusters" might be light single-node clusters. Some applications be be deployed to several cluster groups. E.g. the application to place information on a smart sign might also be deployed to the tram cars that also have smart signs in passenger compartments or the engineers compartment.
GitOps and DevSecOps would be used to make sure that applications would be deployed on the correct clusters. Some of the "clusters" might be light single-node clusters. Some applications can be deployed to several cluster groups. E.g. the application to place information on a smart sign might also be deployed to the tram cars that also have smart signs in passenger compartments or the engineers compartment.
2 changes: 1 addition & 1 deletion content/patterns/industrial-edge/demo-script.md
@@ -8,7 +8,7 @@ the latest product and technology improvements.

* Show Red Hat Operators being deployed
* Show available Red Hat Pipelines for the Industrial Edge pattern
* Show the seed pipeline running and explain what is is doing
* Show the seed pipeline running and explain what it is doing
* Demonstration of the Red Hat ArgoCD views
* Show the openshift-gitops-server view
* Show the datacenter-gitops-server view
@@ -42,7 +42,7 @@ Here's an inventory of what gets deployed by the Multicloud GitOps pattern on th

The Multicloud GitOps pattern has been tested with a defined set of specifically tested configurations that represent the most common combinations that Red Hat OpenShift Container Platform (OCP) customers are using or deploying for the x86_64 architecture.

The datacenter hub OpenShift cluster is made up of the the following on the AWS deployment tested:
The datacenter hub OpenShift cluster is made up of the following on the AWS deployment tested:

| Node Type | Number of nodes | Cloud Provider | Instance Type
| :---- | :----: | :---- | :----
4 changes: 2 additions & 2 deletions content/patterns/omnicloud/getting-started.md
@@ -33,7 +33,7 @@ aliases: /omnicloud/getting-started/

### Glossary

- Red Hat Openshift Container Platform : OCP is an enterprise Kubernetes platform that enables organizations to build, deploy, and manage containerized applications at scale.
- Red Hat OpenShift Container Platform : OCP is an enterprise Kubernetes platform that enables organizations to build, deploy, and manage containerized applications at scale.
- Red Hat Ansible Automation Platform : AAP is an enterprise-grade automation solution that enables organizations to automate IT processes, application deployments, and infrastructure management across hybrid and multi-cloud environment
- Red Hat Advanced Cluster Management : centralized platform for managing multiple OpenShift clusters across on-premises, hybrid, and multi-cloud environments.
- Hub Cluster : Control plane cluster which deploys & manages OpenShift cluster on targeted cloud or on-prem environment.
@@ -280,7 +280,7 @@ For connected environments:
[https://console.redhat.com/openshift/downloads]
```

- Login to the Openshift cluster using:
- Login to the OpenShift cluster using:

```
$ oc login --token=<API token> --server=<API URL:6443>
32 changes: 16 additions & 16 deletions content/patterns/regional-dr/_index.md
@@ -22,12 +22,12 @@ As more and more institution and mission critical organizations are moving
in the cloud, the possible impact of having a provider failure, might this be
only related to only one region, is very high.

This pattern is designed to prove the resiliency capabilities of Red Hat Openshift
This pattern is designed to prove the resiliency capabilities of Red Hat OpenShift
in such scenario.

The Regional Disaster Recovery Pattern, is designed to setup an multiple instances
of Openshift Container Platform cluster connectedbetween them to prove multi-region
resiliency by maintaing the application running in the event of a regional failure.
The Regional Disaster Recovery Pattern is designed to set up multiple instances
of OpenShift Container Platform cluster connected between them to prove multi-region
resiliency by maintaining the application running in the event of a regional failure.

In this scenario we will be working in a Regional Disaster Recovery setup, and the
synchronization parameters can be specified in the value file.
@@ -67,7 +67,7 @@ so consider this when designing your infrastructure deployment on the values
files of the pattern). This is the main reason because this RegionalDR is
configured in an Active-Passive mode.

It requires an already existing Openshift cluster, which will be used for installing the
It requires an already existing OpenShift cluster, which will be used for installing the
pattern, deploying active and passive clusters manage the application
scheduling.

@@ -85,32 +85,32 @@ clusters.

The _Regional DR Pattern_ leverages [Red Hat OpenShift Data Foundation][odf]'s
[Regional DR][rdr] solution, automating applications failover between
[Red Had Advanced Cluster Management][acm] managed clusters in different regions.
[Red Hat Advanced Cluster Management][acm] managed clusters in different regions.

- The pattern is kick-started by ansible and uses ACM to overlook and orchestrate the process
- The demo application uses MongoDB writing its data on a Persistent Volume Claim backe by ODF
- The demo application uses MongoDB writing its data on a Persistent Volume Claim backed by ODF
- We have developed a DR trigger which will be used to start the DR process
- The end user needs to configure which PV's need synchronization and the latencies
- ACS Can be used for eventual policies
- The clusters are connected by submariner and, to have a faster recovery time, we suggest having
hybernated clusters ready to be used

### Red Hat Technologies
- [Red Hat Openshift Container Platform][ocp]
- [Red Hat Openshift Data Foundation][odf]
- [Red Hat Openshift GitOps][ops]
- [Red Hat Openshift Advanced Cluster Management][acm]
- [Red Hat Openshift Advanced Cluster Security][acs]
- [Red Hat OpenShift Container Platform][ocp]
- [Red Hat OpenShift Data Foundation][odf]
- [Red Hat OpenShift GitOps][ops]
- [Red Hat OpenShift Advanced Cluster Management][acm]
- [Red Hat OpenShift Advanced Cluster Security][acs]

## Operators and Technologies this Pattern Uses
- [Regional DR Trigger Operator][opr]
- [Submariner][sub]

## Tested on

- Red Hat Openshift Container Platform v4.13
- Red Hat Openshift Container Platform v4.14
- Red Hat Openshift Container Platform v4.15
- Red Hat OpenShift Container Platform v4.13
- Red Hat OpenShift Container Platform v4.14
- Red Hat OpenShift Container Platform v4.15

## Architecture
This section explains the architecture deployed by this Pattern and its Logical
@@ -123,7 +123,7 @@ and Physical perspectives.


## Installation
This patterns is designed to be installed in an Openshift cluster which will
This pattern is designed to be installed in an OpenShift cluster which will
work as the orchestrator for the other clusters involved. The Adanced Cluster Manager
installed will neither run the applications nor store any data from them, but it
will take care of the plumbing of the various clusters involved,
@@ -281,7 +281,7 @@ Click on the "three dots" menu on the right, which will open a dialog like the f

[![show-vm-open-console](/images/virtualization-starter-kit/aeg-open-vm-console.png)](/images/virtualization-starter-kit/aeg-open-vm-console.png)

The virtual machine console view will show a standard RHEL console login screen. It can be accessed using the the initial user name (`cloud-user` by default) and password (which is what is specified in the Helm chart Values as either the password specific to that machine group, or the default cloudInit.
The virtual machine console view will show a standard RHEL console login screen. It can be accessed using the initial user name (`cloud-user` by default) and password (which is what is specified in the Helm chart Values as either the password specific to that machine group, or the default cloudInit.

### Initial User login (cloud-user)
