diff --git a/content/learn/getting-started-multi-cloud-gitops.adoc b/content/learn/getting-started-multi-cloud-gitops.adoc
new file mode 100644
index 000000000..c678aab72
--- /dev/null
+++ b/content/learn/getting-started-multi-cloud-gitops.adoc
@@ -0,0 +1,311 @@
+---
+menu:
+  learn:
+    parent: Patterns quick start
+title: Getting Started with Multicloud GitOps
+aliases: /infrastructure/using-validated-pattern-operator/
+weight: 20
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+include::modules/comm-attributes.adoc[]
+
+== Getting Started with Multicloud GitOps
+
+Multicloud GitOps is a foundational pattern that demonstrates GitOps principles for managing applications across multiple clusters. It provides:
+
+* A GitOps framework using `ArgoCD`
+* Infrastructure-as-Code practices
+* Multi-cluster management capabilities
+* A template for secure secret management
+
+Red Hat recommends the Multicloud GitOps pattern as your base pattern because:
+
+. It establishes core GitOps practices
+. It provides a minimal but complete implementation
+. It serves as a foundation for other patterns
+. It demonstrates key validated patterns concepts
+
+[NOTE]
+====
+Other patterns build upon these concepts, making this an ideal starting point for your validated patterns journey.
+====
+
+== Deploying the Multicloud GitOps pattern
+
+.Prerequisites
+
+* An OpenShift cluster
+ ** To create an OpenShift cluster, go to the https://console.redhat.com/[Red Hat Hybrid Cloud console].
+ ** Select *Services \-> Containers \-> Create cluster*.
+ ** The cluster must have a dynamic `StorageClass` to provision `PersistentVolumes`. 
Verify that a dynamic `StorageClass` exists before creating one by running the following command:
++
+[source,terminal]
+----
+$ oc get storageclass -o custom-columns=NAME:.metadata.name,PROVISIONER:.provisioner,DEFAULT:.metadata.annotations."storageclass\.kubernetes\.io/is-default-class"
+----
++
+.Example output
++
+[source,terminal]
+----
+NAME      PROVISIONER       DEFAULT
+gp2-csi   ebs.csi.aws.com
+gp3-csi   ebs.csi.aws.com   true
+
+----
+
+* Optional: A second OpenShift cluster for multicloud demonstration.
+//Replaced git and podman prereqs with the tooling dependencies page
+* https://validatedpatterns.io/learn/quickstart/[Install the tooling dependencies].
+
+The use of this pattern depends on having at least one running Red Hat OpenShift cluster. However, consider creating a cluster for deploying the GitOps management hub assets and a separate cluster for the managed cluster.
+
+If you do not have a running Red Hat OpenShift cluster, you can start one on a
+public or private cloud by using https://console.redhat.com/openshift/create[Red Hat Hybrid Cloud Console].
+
+.Procedure
+
+. From the https://github.com/validatedpatterns/multicloud-gitops[multicloud-gitops] repository on GitHub, click the Fork button.
+
+. Clone the forked copy of this repository by running the following command:
++
+[source,terminal]
+----
+$ git clone git@github.com:/multicloud-gitops.git
+----
+
+. Navigate to the root directory of your Git repository by running the following command:
++
+[source,terminal]
+----
+$ cd /path/to/your/repository
+----
+
+. Set the upstream repository by running the following command:
++
+[source,terminal]
+----
+$ git remote add -f upstream https://github.com/validatedpatterns/multicloud-gitops.git
+----
+
+. 
Verify the setup of your remote repositories by running the following command:
++
+[source,terminal]
+----
+$ git remote -v
+----
++
+.Example output
++
+[source,terminal]
+----
+origin git@github.com:/multicloud-gitops.git (fetch)
+origin git@github.com:/multicloud-gitops.git (push)
+upstream https://github.com/validatedpatterns/multicloud-gitops.git (fetch)
+upstream https://github.com/validatedpatterns/multicloud-gitops.git (push)
+----
+
+. Create a local copy of the secret values file that can safely include credentials. Run the following command:
++
+[source,terminal]
+----
+$ cp values-secret.yaml.template ~/values-secret-multicloud-gitops.yaml
+----
++
+[NOTE]
+====
+Putting the `values-secret-multicloud-gitops.yaml` file in your home directory ensures that it does not get pushed to your Git repository. The file is based on the `values-secret.yaml.template` file that the pattern provides in the top-level directory. When you create your own patterns, you add your secrets to this file and save it. At the moment the focus is on getting started and familiar with this base Multicloud GitOps pattern.
+====
+
+. Create a new feature branch, for example `my-branch`, from the `main` branch for your content:
++
+[source,terminal]
+----
+$ git checkout -b my-branch main
+----
+
+. Push the branch to origin to gain the flexibility needed to customize the base Multicloud GitOps pattern by running the following command:
++
+[source,terminal]
+----
+$ git push origin my-branch
+----
+
+You can proceed to install the Multicloud GitOps pattern by using the web console or from the command line by using the `./pattern.sh` script.
+
+To install the Multicloud GitOps pattern by using the web console, you must first install the Validated Patterns Operator. The Validated Patterns Operator installs and manages Validated Patterns. 
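+
+The repository preparation steps above can be condensed into a small helper script. This is a sketch, not part of the pattern's tooling: `GITHUB_USER` and the `my-branch` name are placeholders that you would adjust for your own fork.
+
```shell
#!/usr/bin/env sh
# Sketch of the fork-and-branch preparation described above.
# GITHUB_USER and BRANCH are placeholders, not values provided by the pattern.
GITHUB_USER="${GITHUB_USER:-example-user}"
BRANCH="${BRANCH:-my-branch}"

origin_url() {
    # SSH URL of your fork
    printf 'git@github.com:%s/multicloud-gitops.git\n' "$GITHUB_USER"
}

upstream_url() {
    # HTTPS URL of the upstream repository
    printf 'https://github.com/validatedpatterns/multicloud-gitops.git\n'
}

prepare_repo() {
    # Clone, add the upstream remote, copy the secret template, branch, push.
    git clone "$(origin_url)" &&
    cd multicloud-gitops &&
    git remote add -f upstream "$(upstream_url)" &&
    cp values-secret.yaml.template ~/values-secret-multicloud-gitops.yaml &&
    git checkout -b "$BRANCH" main &&
    git push origin "$BRANCH"
}
```
+
+Calling `prepare_repo` performs the clone, remote setup, secret-template copy, branch creation, and push in one step.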
+ +//Include Procedure module here +[id="installing-validated-patterns-operator_{context}"] +== Installing the {validated-patterns-op} using the web console + +.Prerequisites +* Access to an {ocp} cluster by using an account with `cluster-admin` permissions. + +.Procedure + +. Navigate in the {hybrid-console-first} to the *Operators* → *OperatorHub* page. + +. Scroll or type a keyword into the *Filter by keyword* box to find the Operator you want. For example, type `validated patterns` to find the {validated-patterns-op}. + +. Select the Operator to display additional information. ++ +[NOTE] +==== +Choosing a Community Operator warns that Red Hat does not certify Community Operators; you must acknowledge the warning before continuing. +==== + +. Read the information about the Operator and click *Install*. + +. On the *Install Operator* page: + +.. Select an *Update channel* (if more than one is available). + +.. Select a *Version* (if more than one is available). + +.. Select an *Installation mode*: +*** *All namespaces on the cluster (default)* installs the Operator in the default `openshift-operators` namespace to watch and be made available to all namespaces in the cluster. This option is not always available. +*** *A specific namespace on the cluster* allows you to choose a specific, single namespace in which to install the Operator. The Operator will only watch and be made available for use in this single namespace. + +.. Select *Automatic* or *Manual* approval strategy. + +. Click *Install* to make the Operator available to the selected namespaces on this {ocp} cluster. + +.Verification +To confirm that the installation is successful: + +. Navigate to the *Operators* → *Installed Operators* page. + +. Check that the Operator is installed in the selected namespace and its status is `Succeeded`. 
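+
+The console verification above can also be approximated from the terminal. This is a sketch: in a live cluster you would pipe the output of `oc get csv -n openshift-operators` into the helper, and the CSV name filter `patterns-operator` is an assumption that may differ between operator versions.
+
```shell
#!/usr/bin/env sh
# Print the PHASE column for a ClusterServiceVersion whose name matches $1.
# Feed it the tabular output of:
#   oc get csv -n openshift-operators
csv_phase() {
    awk -v name="$1" '$1 ~ name { print $NF }'
}
```
+
+For example, `oc get csv -n openshift-operators | csv_phase patterns-operator` should print `Succeeded` after the installation completes.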
+
+//Include Procedure module here
+[id="create-pattern-instance_{context}"]
+== Creating the Multicloud GitOps instance
+
+.Prerequisites
+The {validated-patterns-op} is successfully installed in the relevant namespace.
+
+.Procedure
+
+. Navigate to the *Operators* → *Installed Operators* page.
+
+. Click the installed *{validated-patterns-op}*.
+
+. Under the *Details* tab, in the *Provided APIs* section, in the
+*Pattern* box, click *Create instance* to open the *Create Pattern* page.
+
+. On the *Create Pattern* page, select *Form view* and enter information in the following fields:
+
+** *Name* - A name for the pattern deployment. This name is used in the projects that the pattern creates.
+** *Labels* - Apply any other labels you might need for deploying this pattern.
+** *Cluster Group Name* - Select a cluster group name to identify the type of cluster where this pattern is being deployed. For example, if you are deploying the {ie-pattern}, the cluster group name is `datacenter`. If you are deploying the {mcg-pattern}, the cluster group name is `hub`.
++
+To know the cluster group name for the patterns that you want to deploy, check the relevant pattern-specific requirements.
+. Expand the *Git Config* section to reveal the options and enter the required information.
+. Leave *In Cluster Git Server* unchanged.
+.. Change the *Target Repo* URL to your forked repository URL. For example, change `+https://github.com/validatedpatterns/+` to `+https://github.com//+`.
+.. Optional: You might need to change the *Target Revision* field. The default value is `HEAD`. However, you can also provide a value for a branch, tag, or commit that you want to deploy. For example, `v2.1`, `main`, or a branch that you created, `my-branch`.
+. Click *Create*.
++
+[NOTE]
+====
+A pop-up error with the message "Oh no! Something went wrong." might appear during the process. This error can be safely disregarded as it does not impact the installation of the Multicloud GitOps pattern. 
Use the Hub ArgoCD UI, accessible through the nines menu, to check the status of ArgoCD instances, which display states such as Progressing, Healthy, and so on, for each managed application. The Cluster ArgoCD provides detailed status on each application, as defined in the clustergroup values file.
+====
+
+The {rh-gitops} Operator appears in the list of *Installed Operators*. The {rh-gitops} Operator installs the remaining assets and artifacts for this pattern. To view the installation of these assets and artifacts, such as {rh-rhacm-first}, ensure that you switch to *Project:All Projects*.
+
+When you view the `config-demo` project through the Hub `ArgoCD` UI from the nines menu, it appears stuck in a Degraded state. This is the expected behavior when installing by using the OpenShift Container Platform console.
+
+* To resolve this, run the following command to load the secrets into the vault:
++
+[source,terminal]
+----
+$ ./pattern.sh make load-secrets
+----
++
+[NOTE]
+====
+You must have created a local copy of the secret values file by running the following command:
+
+[source,terminal]
+----
+$ cp values-secret.yaml.template ~/values-secret-multicloud-gitops.yaml
+----
+====
+
+The deployment does not take long and should complete successfully.
+
+Alternatively, you can deploy the Multicloud GitOps pattern from the command line by using the `pattern.sh` script.
+
+[id="deploying-cluster-using-patternsh-file"]
+== Deploying the cluster by using the pattern.sh file
+
+To deploy the cluster by using the `pattern.sh` file, complete the following steps:
+
+. Log in to your cluster by running the following command:
++
+[source,terminal]
+----
+$ oc login
+----
++
+Optional: Set the `KUBECONFIG` variable for the `kubeconfig` file path:
++
+[source,terminal]
+----
+$ export KUBECONFIG=~/
+----
+
+. Deploy the pattern to your cluster. Run the following command:
++
+[source,terminal]
+----
+$ ./pattern.sh make install
+----
+
+. 
Verify that the Operators have been installed.
+.. To verify, in the OpenShift Container Platform web console, navigate to the *Operators → Installed Operators* page.
+.. Check that the Operator is installed in the `openshift-operators` namespace and its status is `Succeeded`.
+. Verify that all applications are synchronized. Under the project `multicloud-gitops-hub`, click the URL for the `hub` GitOps `server`. The Vault application is not yet synced.
++
+image::multicloud-gitops/multicloud-gitops-argocd.png[Multicloud GitOps Hub]
+
+As part of installing the pattern by using the `pattern.sh` script, HashiCorp Vault is installed. Running `./pattern.sh make install` also calls the `load-secrets` makefile target. This `load-secrets` target looks for a YAML file describing the secrets to be loaded into Vault, and if it cannot find one, it uses the `values-secret.yaml.template` file in the Git repository to generate random secrets.
+
+For more information, see the section on https://validatedpatterns.io/secrets/vault/[Vault].
+
+.Verification of test pages
+
+Verify that the *hello-world* application deployed successfully as follows:
+
+. Navigate to the *Networking* -> *Routes* menu options.
+
+. From the *Project:* drop-down, select the *hello-world* project.
+
+. Click the *Location URL*. This should reveal the following:
++
+[source,terminal]
+----
+Hello World!
+
+Hub Cluster domain is 'apps.aws-hub-cluster.openshift.org'
+Pod is running on Local Cluster Domain 'apps.aws-hub-cluster.openshift.org'
+----
+
+Verify that the *config-demo* application deployed successfully as follows:
+
+. Navigate to the *Networking* -> *Routes* menu options.
+
+. Select the *config-demo* *Project*.
+
+. Click the *Location URL*. 
This should reveal the following:
++
+[source,terminal]
+----
+Hub Cluster domain is 'apps.aws-hub-cluster.openshift.org'
+Pod is running on Local Cluster Domain 'apps.aws-hub-cluster.openshift.org'
+The secret is `secret`
+----
diff --git a/content/learn/getting-started-secret-management.adoc b/content/learn/getting-started-secret-management.adoc
new file mode 100644
index 000000000..04c3b8128
--- /dev/null
+++ b/content/learn/getting-started-secret-management.adoc
@@ -0,0 +1,262 @@
+---
+menu:
+  learn:
+    parent: Patterns quick start
+title: Configuring secrets
+aliases: /infrastructure/using-validated-pattern-operator/
+weight: 21
+---
+
+:toc:
+:imagesdir: /images
+:_content-type: ASSEMBLY
+include::modules/comm-attributes.adoc[]
+
+== What are secrets
+
+Sensitive information referred to as secrets should not be exposed publicly or handled insecurely. This can include passwords, private keys, certificates (particularly the private parts), database connection strings, and other confidential data.
+
+A simple way to think of secrets is as anything that security teams or responsible system administrators would ensure stays protected and not published in a public space.
+
+Secrets are crucial for the functioning of applications, for example database passwords or cache keys. Without access to these secrets, applications might fail or operate in a significantly impaired manner.
+
+Secrets often vary between different deployments of the same application, for example separate load balancer certificates for different instances. Using the same secret across multiple deployments is generally discouraged because it increases the risk of exposure.
+
+Applications often need secrets to run correctly, making them indispensable. Removing or mishandling secrets can disrupt operations.
+
+== How Validated Patterns implements secrets management
+
+Validated Patterns supports the tokenization approach for secret management. 
Tokenization involves keeping actual secret values out of version control (for example, Git) by using tokens or references in place of the real values. An external secret management system resolves those references and supplies the real secrets at runtime.
+
+This approach requires integration with external secret management systems, for example HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, and CyberArk Conjur.
+
+The External Secrets Operator (ESO) is integral to the validated patterns framework, enabling secure secret management by fetching secrets from various secret stores and projecting them into Kubernetes namespaces. ESO supports integration with providers such as HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, GCP, IBM Secrets Manager, and others.
+
+ESO:
+
+* Supports a range of secret providers, ensuring no vendor lock-in.
+* Keeps secrets out of version-controlled repositories, using token references in Git instead.
+* Allows teams to manage secrets securely while maintaining efficient Git workflows.
+
+[NOTE]
+====
+As of November 15, 2024, ESO is not officially supported by Red Hat as a product.
+====
+
+ESO's custom file format and utilities streamline secret management by allowing file references and supporting encrypted secret storage. The design prioritizes security through multi-layer encryption and simplifies key management. In particular, the `ini` key type is especially helpful for handling AWS credentials, where mismanagement could lead to unauthorized use and potential financial or operational issues.
+
+The primary backend secret store for Validated Patterns is HashiCorp Vault. HashiCorp Vault acts as a centralized service for securely managing secrets, such as passwords, API keys, and certificates.
+
+Unlike other secret management systems tied to specific cloud providers, for example AWS Secrets Manager or Azure Key Vault, Vault can be deployed across different clouds, on bare-metal systems, and in hybrid environments. 
This cross-platform support made it a popular and practical choice for maintaining a consistent secrets management strategy.
+
+== Configuring Secrets
+
+Secret management in validated patterns follows GitOps best practices while maintaining security. Here's how to configure your secrets:
+
+=== Using Vault for Secret Management
+
+. Access the Vault instance deployed by the pattern.
+
+.. Click the nines menu in the UI and choose *Vault* to open the Vault UI.
+
+.. Log in with the root token from the `vaultkeys` secret in the `imperative` namespace. Retrieve the token by running the following command:
++
+[source,terminal]
+----
+$ oc extract -n imperative secret/vaultkeys --to=- --keys=vault_data_json 2>/dev/null | jq -r ".root_token"
+----
+
+=== Adding a Secret to the Multicloud GitOps Pattern
+
+Follow these steps to add a new secret to your forked local branch:
+
+. Navigate to the Multicloud GitOps pattern repository by running the following command:
++
+[source,terminal]
+----
+$ cd 
+----
+
+. Switch to the branch you created in "Getting Started with Multicloud GitOps" by running the following command:
++
+[source,terminal]
+----
+$ git checkout my-branch
+----
+
+. Edit the existing `~/values-secret-multicloud-gitops.yaml` file:
++
+[source,terminal]
+----
+$ vi ~/values-secret-multicloud-gitops.yaml
+----
+
+. Add the following block to define a new top-level secret called `mysecret`:
++
+[source,yaml]
+----
+secrets:
+  - name: mysecret
+    vaultPrefixes:
+    - global
+    fields:
+    - name: foo
+      onMissingValue: generate
+    - name: bar
+      onMissingValue: generate
+----
+
+. Load the secrets into the Vault by running the following command:
++
+[source,terminal]
+----
+$ ./pattern.sh make load-secrets
+----
+
+. Verify the secret in the Vault UI.
+
+.. Access the Vault's web UI.
+
+.. From the Dashboard menu, navigate to the `secret/` secrets engine where your secrets are stored.
+
+.. Expand the `global` folder.
+
+.. 
Verify that the `mysecret` entry exists and contains the `foo` and `bar` fields with auto-generated values.
+
+=== Creating a new external secret in OpenShift GitOps
+
+Follow these steps to create and deploy a new external secret in your GitOps repository.
+
+. Navigate to the `charts/all/config-demo/templates` directory in your repository:
++
+[source,terminal]
+----
+$ cd charts/all/config-demo/templates
+----
+
+. Create a new YAML file named `mysecret-external-secret.yaml`:
++
+[source,terminal]
+----
+$ touch mysecret-external-secret.yaml
+----
+
+. Open the file in your preferred text editor:
++
+[source,terminal]
+----
+$ vi mysecret-external-secret.yaml
+----
+
+. Add the following content to define a new external secret using the format of the existing template:
++
+[source,yaml]
+----
+---
+apiVersion: "external-secrets.io/v1beta1"
+kind: ExternalSecret
+metadata:
+  name: config-demo-mysecret <1>
+  namespace: config-demo
+spec:
+  refreshInterval: 15s <2>
+  secretStoreRef: <3>
+    name: {{ .Values.secretStore.name }}
+    kind: {{ .Values.secretStore.kind }}
+  target:
+    name: config-demo-mysecret
+    template:
+      type: Opaque
+  dataFrom: <4>
+  - extract:
+      key: {{ .Values.configdemomysecret.key }}
+----
++
+<1> Specifies the name of the new secret to be created in the `config-demo` namespace.
+<2> Sets how frequently the external secret is refreshed.
+<3> References the Vault or secret store as defined in the Helm values.
+<4> Uses `extract` to source all key-value pairs from the specified key in the Vault.
+
+. Edit the chart's `values.yaml` file to reflect this new external secret:
++
+[source,terminal]
+----
+$ vi ~/multicloud-gitops/charts/all/config-demo/values.yaml
+----
+
+.. Add the following content:
++
+[source,yaml]
+----
+configdemomysecret:
+  key: secret/data/global/config-demo
+----
+
+. Add the new file to Git:
++
+[source,terminal]
+----
+$ git add .
+----
+
+.. 
Commit your changes:
++
+[source,terminal]
+----
+$ git commit -m "Added mysecret-external-secret to create mysecret-secret in config-demo"
+----
+
+.. Push your branch to the origin of your fork:
++
+[source,terminal]
+----
+$ git push origin my-branch
+----
+
+. Ensure that ArgoCD is monitoring the `charts/all/config-demo` directory.
+
+. Wait for ArgoCD to synchronize and apply the new changes. You can observe the synchronization status in the ArgoCD web UI.
++
+The new `config-demo-mysecret` secret should be created and visible in the `config-demo` project, populated with the relevant data extracted from the Vault.
+
+. Verify the secret in the cluster:
+
+.. After ArgoCD has applied the changes, verify that the `config-demo-mysecret` secret has been created in the `config-demo` namespace:
++
+[source,terminal]
+----
+$ oc get secret config-demo-mysecret -n config-demo
+----
++
+.Expected output
++
+[source,terminal]
+----
+NAME                   TYPE     DATA   AGE
+config-demo-mysecret   Opaque   1      25s
+----
+
+.. Inspect the details of the secret if necessary:
++
+[source,terminal]
+----
+$ oc describe secret config-demo-mysecret -n config-demo
+----
+
+.. In the OpenShift Container Platform web console, select the *config-demo* *Project*.
+
+.. Select the *config-demo-mysecret* to review the secret details.
++
+image::multicloud-gitops/config-demo-mysecret.png[Secret details]
+
+== Next Steps
+
+* Explore the deployed components in your OpenShift console
+* Review the GitOps repositories created by the pattern
+* Try modifying the configuration to understand the GitOps workflow
+* Consider exploring other validated patterns that build on this foundation
+
+[IMPORTANT]
+====
+Remember to consult the official documentation at link:https://validatedpatterns.io/[Validated Patterns] for detailed information about specific features and advanced configurations. 
+==== \ No newline at end of file diff --git a/content/learn/importing-a-cluster.adoc b/content/learn/importing-a-cluster.adoc index ba5798ace..9373e1513 100644 --- a/content/learn/importing-a-cluster.adoc +++ b/content/learn/importing-a-cluster.adoc @@ -24,17 +24,21 @@ To deploy a cluster that can be imported into RHACM, use the `openshift-install` == Importing a cluster using the RHACM User Interface -=== Getting to the RHACM user interface +After ACM is installed a message regarding a "Web console update is available" is displayed. Follow this guidance to import a cluster: -After ACM is installed a message regarding a "Web console update is available" will be displayed. Click on the "Refresh web console" link. +. Access the the RHACM user interface by clicking the "Refresh web console" link. -On the upper-left side you'll see a pull down labeled "local-cluster". Select "All Clusters" from this pull down. This will navigate to the RHACM console and to its "Clusters" section +. On the upper-left side you'll see a pull down labeled "local-cluster". Select "All Clusters" from this pull down. This will navigate to the RHACM console and to its "Clusters" section -Select the "Import cluster" option. +. Select the "Import an existing cluster" option. -=== Importing the cluster +. On the "Import an existing cluster" page, enter the cluster name (arbitrary) and choose `Kubeconfig` as the `Import mode`. -On the "Import an existing cluster" page, enter the cluster name (arbitrary) and choose Kubeconfig as the "import mode". Add the tag `clusterGroup=` using the appropriate cluster group specified in the pattern. Press `Import`. +. Add the Additional label `clusterGroup=` using the appropriate cluster group specified in the pattern. + +. Click `Next`. Optionally choose an automation template to run Ansible jobs at different stages of the cluster lifecycle. + +. Click `Next` and on the review screen click `Import` to successfully import the cluster. 
Using this method, you are done. Skip to the section in your pattern documentation that describes how you can confirm the pattern deployed correctly on the managed cluster. diff --git a/content/learn/quickstart.adoc b/content/learn/quickstart.adoc index da96fa531..64d2f68c8 100644 --- a/content/learn/quickstart.adoc +++ b/content/learn/quickstart.adoc @@ -2,84 +2,58 @@ layout: default title: Patterns quick start menu: learn -weight: 40 +weight: 20 --- :toc: :_content-type: ASSEMBLY include::modules/comm-attributes.adoc[] -== Patterns quick start +== Patterns quick start overview -Each pattern can be deployed using the command line. The only requirement is to have `git` and `podman` installed. See the <> for more information. +This validated pattern quickstart offers a streamlined guide to deploying predefined, reliable configurations and applications, ensuring they meet established standards. It provides step-by-step instructions on setup, prerequisites, and configuration, enabling administrators to deploy tested, supportable patterns quickly. These patterns simplify complex deployments by applying reusable configurations suited to various infrastructure and application needs, allowing users to efficiently deploy, manage, and scale applications with GitOps. This approach also reduces the risks and time associated with custom configurations. -Patterns deployment requires several tools including Helm to install. However, the validated patterns framework removes the need to install and maintain these tools. The `pattern.sh` script uses a container which includes the necessary tools. The use of that container is why you need to install `podman`. +Validated patterns can be deployed using either the OpenShift-based Validated Patterns framework or the Ansible GitOps Framework (AGOF). The OpenShift-based validated patterns framework is the most common method for deploying applications and infrastructure on the OpenShift Container Platform. 
It offers a set of predefined configurations and patterns that follow best practices and are validated by Red Hat. -Check the `values-\*.yaml` for changes that are needed before deployment. After changing the `values-*.yaml` files where needed and pushing them to your git repository, you can run `./pattern.sh make install` from your local repository directory and that will deploy the datacenter/hub cluster for a pattern. Edge clusters are deployed by joining/importing them into ACM on the hub. +== Getting Started with Validated Patterns -Alternatively to the `./pattern.sh make install` method, you can use the https://operatorhub.io/operator/patterns-operator[validated pattern operator] available in the OpenShift console. +This guide steps you through the process of deploying your first validated pattern on an OpenShift cluster. By the end of this guide, you'll have a working instance of the Multicloud GitOps pattern, which serves as an excellent foundation for exploring other patterns. -For information on using the Validated Patterns Operator, see link:/infrastructure/using-validated-pattern-operator/[Using the Validated Pattern Operator]. +=== What You'll Learn -Follow any other post-install instructions for the pattern on that pattern’s Getting started page. +. Setting up prerequisites for validated patterns +. Installing and configuring the Validated Patterns Operator +. Deploying the Multicloud GitOps pattern +. 
Managing secrets and configurations +== Prerequisites -== Prerequisite installation instructions [[installation_prerequisites]] +Before beginning, ensure you have the following: -== Tested Operating systems -The following instructions have been tested on the following operating systems: +=== OpenShift Cluster Requirements -* Red Hat Enterprise Linux 8 and 9 -* CentOS 8 and 9 -* Fedora 36 and onwards -* Debian Bookworm -* Ubuntu 22.04 -* Mac OSX Big Sur and onwards +* A running OpenShift 4.12 or later +* Cluster-admin privileges +* At least 8 CPU cores available +* Minimum 16GB RAM available -=== Red Hat Enterprise Linux 8 and 9 -Make sure that you have both the `appstream` and the `baseos` repositories configured. -For example on RHEL 8 you will get the following: +=== Storage Requirements -[source,terminal] ----- -sudo dnf repolist -Updating Subscription Management repositories. -repo id repo name -rhel-8-for-x86_64-appstream-rpms Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs) -rhel-8-for-x86_64-baseos-rpms Red Hat Enterprise Linux 8 for x86_64 - BaseOS (RPMs) ----- +* A default storage class configured for dynamic provisioning +* At least 10GB of available storage -Install `podman` and `git`: +=== Network Requirements -[source,terminal] ----- -sudo dnf install -y podman git ----- +.For connected environments: -=== Fedora -Install `podman` and `git`: +* Access to public container registries +* Access to GitHub repositories -[source,terminal] ----- -sudo dnf install -y podman git ----- +.For disconnected environments: -=== Debian and derivatives -Install `podman` and `git`: +* One or more openshift clusters deployed in a disconnected network +* An OCI-compliant registry that is accessible from the disconnected network +* A Git Repository that is accessible from the disconnected network -[source,terminal] ----- -sudo apt-get install -y podman git ----- +For more information on disconnected installation, see link:/blog/2024-10-12-disconnected/[Validated 
Patterns in a disconnected Network]. -=== Mac OSX -Install `podman` and `git`: - -[source,terminal] ----- -/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" -brew install podman git -# Containers on MacOSX run in a VM which is managed by "podman machine" commands -podman machine init -v ${HOME}:${HOME} -v /private/tmp/:/private/tmp -podman machine start ----- diff --git a/content/learn/validated_patterns_frameworks.adoc b/content/learn/validated_patterns_frameworks.adoc index 6198e5d49..5dde427c8 100644 --- a/content/learn/validated_patterns_frameworks.adoc +++ b/content/learn/validated_patterns_frameworks.adoc @@ -1,7 +1,7 @@ --- menu: learn title: Validated patterns frameworks -weight: 20 +weight: 30 aliases: /validated-patterns-frameworks/ --- @@ -18,7 +18,4 @@ The OpenShift-based validated patterns framework is the most common method for d Ansible GitOps Framework (AGOF) is an alternative framework, designed to provide a framework for GitOps without Kubernetes. AGOF is not a pattern itself; it is a framework for installing Ansible Automation Platform (AAP), and then using that as the GitOps engine to drive other pattern work. AGOF comes with code to install VMs in AWS, if desired, or else it can work with previously provisioned VMs, or a functional AAP Controller endpoint. -The goal with either framework, is that developers, operators, security, and architects build a secure and repeatable day one deployment mechanism and maintenance automation for day two operations. - - - +The goal with either framework, is that developers, operators, security, and architects build a secure and repeatable day one deployment mechanism and maintenance automation for day two operations. 
\ No newline at end of file diff --git a/content/learn/vp_agof.adoc b/content/learn/vp_agof.adoc index 9be958acd..9c1b52401 100644 --- a/content/learn/vp_agof.adoc +++ b/content/learn/vp_agof.adoc @@ -12,7 +12,7 @@ aliases: /ocp-framework/agof/ :_content-type: ASSEMBLY include::modules/comm-attributes.adoc[] -== About the Ansible GitOps framework (AGOF)for validated patterns +== About the Ansible GitOps framework (AGOF) for validated patterns The link:/patterns/ansible-gitops-framework/[Ansible GitOps Framework] provides an extensible framework to do GitOps with https://docs.ansible.com/platform.html[Ansible Automation Platform] (AAP). It offers useful facilities for developing patterns (community and validated) that work with AAP as the GitOps engine. @@ -256,7 +256,7 @@ This command invokes the `controller_configuration` `dispatch` role on the contr .Verification -The default installation provides an AAP 2.4 installation deployed using the containerized installer, with services deployed this way: +The default installation provides an AAP 2.4 installation deployed by using the containerized installer, with services deployed this way: .agof_vault settings [cols="30%,70%",options="header"] @@ -274,7 +274,7 @@ a| EDA Automation Controller |=== -Once the install completes, you will have a project, an inventory (consisting of the AAP controller), a credential (the private key from ec2), a job template (which runs a fact gather on the AAP controller) and a schedule that will run the job template every 5 minutes, +Once the install completes, you will have a project, an inventory (consisting of the AAP controller), a credential (the private key from ec2), a job template (which runs a fact gather on the AAP controller) and a schedule that will run the job template every 5 minutes. . Log in to `https://aap.{{ ec2_name_prefix }}.{{ domain }}:8443` with the username `admin` and the password as configured in `admin_password` field of `agof_vault.yml`. 
@@ -294,7 +294,7 @@ In this method, you provide an existing Ansible Automation Platform (AAP) Contro You supply the manifest contents, endpoint hostname, admin username (defaults to "admin"), and admin password, and then the installation hands off to a `controller_config_dir` you define. -* Run the following command to install using this method: +* Run the following command to install by using this method: + [source,terminal] ---- diff --git a/content/patterns/ansible-gitops-framework/_index.md b/content/patterns/ansible-gitops-framework/_index.md index 713ccff3a..0a947ddfd 100644 --- a/content/patterns/ansible-gitops-framework/_index.md +++ b/content/patterns/ansible-gitops-framework/_index.md @@ -31,3 +31,5 @@ The Pattern is then expressed as an Infrastructure as Code repository, which wil - Red Hat Ansible Automation Platform (formerly known as "Ansible Tower") - Red Hat Enterprise Linux + +For more information and guidance on how to use the AGOF framework, see [About the Ansible GitOps framework (AGOF) for validated patterns](https://validatedpatterns.io/learn/vp_agof/). diff --git a/static/images/multicloud-gitops/config-demo-mysecret.png b/static/images/multicloud-gitops/config-demo-mysecret.png new file mode 100644 index 000000000..5f5940747 Binary files /dev/null and b/static/images/multicloud-gitops/config-demo-mysecret.png differ