diff --git a/argo_rollouts/argo-rollouts-overview.adoc b/argo_rollouts/argo-rollouts-overview.adoc index 7552a87a213f..f04d554e1ba1 100644 --- a/argo_rollouts/argo-rollouts-overview.adoc +++ b/argo_rollouts/argo-rollouts-overview.adoc @@ -21,6 +21,12 @@ include::modules/gitops-about-argo-rollout-manager-custom-resources-and-spec.ado // Argo Rollouts architecture overview include::modules/gitops-argo-rollouts-architecture-overview.adoc[leveloffset=+1] +// Argo Rollouts components +include::modules/gitops-argo-rollouts-components.adoc[leveloffset=+2] + +// Argo Rollouts resources +include::modules/gitops-argo-rollouts-resources.adoc[leveloffset=+2] + // Argo Rollouts CLI overview include::modules/gitops-argo-rollouts-cli-overview.adoc[leveloffset=+1] diff --git a/declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc b/declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc index a5ee5021209f..b8b21679bba2 100644 --- a/declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc +++ b/declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc @@ -59,12 +59,24 @@ include::modules/gitops-additional-permissions-for-cluster-config.adoc[leveloffs * xref:../declarative_clusterconfig/customizing-permissions-by-creating-user-defined-cluster-roles-for-cluster-scoped-instances.adoc#customizing-permissions-by-creating-user-defined-cluster-roles-for-cluster-scoped-instances[Customizing permissions by creating user-defined cluster roles for cluster-scoped instances] * xref:../declarative_clusterconfig/customizing-permissions-by-creating-aggregated-cluster-roles.adoc#customizing-permissions-by-creating-aggregated-cluster-roles[Customizing permissions by creating aggregated cluster roles] -// Installing OLM Operators using Red Hat 
OpenShift GitOps -include::modules/gitops-installing-olm-operators-using-gitops.adoc[leveloffset=+1] +// Install OLM Operators using Red Hat OpenShift GitOps +include::modules/gitops-install-olm-operators-using-gitops.adoc[leveloffset=+1] + +// Installing cluster-scoped Operators +include::modules/gitops-installing-cluster-scoped-operators.adoc[leveloffset=+2] + +// Installing namespace-scoped Operators +include::modules/gitops-namespace-scoped-operators.adoc[leveloffset=+2] // Configuring respectRBAC using Red Hat OpenShift GitOps include::modules/gitops-configuring-respect-rbac-using-gitops.adoc[leveloffset=+1] +// Configuring respectRBAC using the CLI +include::modules/gitops-configuring-respect-rbac-using-the-cli.adoc[leveloffset=+2] + +// Configuring respectRBAC by using the web console +include::modules/gitops-configuring-respect-rbac-using-the-web-console.adoc[leveloffset=+2] + [role="_additional-resources"] [id="additional-resources_{context}"] == Additional resources diff --git a/declarative_clusterconfig/customizing-permissions-by-creating-aggregated-cluster-roles.adoc b/declarative_clusterconfig/customizing-permissions-by-creating-aggregated-cluster-roles.adoc index 64b8c12e0c26..8cc86118b2af 100644 --- a/declarative_clusterconfig/customizing-permissions-by-creating-aggregated-cluster-roles.adoc +++ b/declarative_clusterconfig/customizing-permissions-by-creating-aggregated-cluster-roles.adoc @@ -42,8 +42,15 @@ With {gitops-title} 1.14 and later, as a cluster administrator, you can use aggr // Creating aggregated cluster roles include::modules/gitops-creating-aggregated-cluster-roles.adoc[leveloffset=+1] +// Enable the creation of aggregated cluster roles +include::modules/gitops-enable-creation-of-aggregated-cluster-roles.adoc[leveloffset=+2] + +// Create user-defined cluster roles and configure user-defined permissions +include::modules/gitops-create-configure-aggregated-user-defined-permissions.adoc[leveloffset=+2] + // Enabling the creation of 
aggregated cluster roles include::modules/gitops-enabling-the-creation-of-aggregated-cluster-roles.adoc[leveloffset=+1] + [role="_additional-resources"] .Additional resources * xref:../argocd_instance/setting-up-argocd-instance.adoc#setting-up-argocd-instance[Installing a user-defined Argo CD instance] diff --git a/modules/gitops-argo-rollouts-architecture-overview.adoc b/modules/gitops-argo-rollouts-architecture-overview.adoc index 0aa07183cc84..5a05ceeff7c4 100644 --- a/modules/gitops-argo-rollouts-architecture-overview.adoc +++ b/modules/gitops-argo-rollouts-architecture-overview.adoc @@ -20,48 +20,4 @@ The architecture of Argo Rollouts is structured into components and resources. C Argo Rollouts include several mechanisms to gather analysis metrics to verify that a new application version is deployed: * *Prometheus metrics*: The `AnalysisTemplate` CR is configured to connect to Prometheus instances to evaluate the success or failure of one or more metrics. -* *Kubernetes job metrics*: Argo Rollouts support the Kubernetes `Job` resource to run analysis on resource metrics. You can verify a successful deployment of an application based on the successful run of Kubernetes jobs. - -[id="gitops-argo-rollouts-components_{context}"] -== Argo Rollouts components - -Argo Rollouts consists of several components that enable users to practice progressive delivery in {OCP}. - -.Argo Rollouts components -[options="header"] -|=== -|Name |Description -|Argo Rollouts controller |The Argo Rollouts Controller is an alternative to the standard `Deployment` resource and coexists alongside it. This controller only responds to changes in the Argo Rollouts resources and manages the `Rollout` CR. The Argo Rollouts Controller does not modify standard deployment resources. -|AnalysisRun controller |The AnalysisRun controller manages and performs analysis for `AnalysisRun` and `AnalysisTemplate` CRs. 
It connects a rollout to the metrics provider and defines thresholds for metrics that determine if a deployment update is successful for your application. -|`Experiment controller` | The `Experiment` controller runs analysis on short-lived replica sets, and manages the `Experiment` custom resource. The controller can also be integrated with the `Rollout` resource by specifying the `experiment` step in the canary deployment `strategy` field. -|`Service` and `Ingress` controller |The Service controller manages the `Service` resources and the Ingress controller manages the `Ingress` resources modified by Argo Rollouts. These controllers inject additional metadata annotations in the application instances for traffic management. -|Argo Rollouts CLI and UI |Argo Rollouts supports an `oc/kubectl` plugin called Argo Rollouts CLI. You can use it to interact with resources, such as rollouts, analyses, and experiments, from the command line. It can perform operations, such as `pause`, `promote`, or `retry`. The Argo Rollouts CLI plugin can start a local web UI dashboard in the browser to enhance the experience of visualizing the Argo Rollouts resources. -|=== - -[id="gitops-argo-rollouts-resources_{context}"] -== Argo Rollouts resources - -Argo Rollout components manage several resources to enable progressive delivery: - -* *Rollouts-specific resources*: For example, `Rollout`, `AnalysisRun`, or `Experiment`. -* *Kubernetes networking resources*: For example, `Service`, `Ingress`, or `Route` for network traffic shaping. Argo Rollouts integrate with these resources, which are referred to as traffic management. - -These resources are essential for customizing the deployment of applications through the `Rollout` CR. - -Argo Rollouts support the following actions: - -* Route percentage-based traffic for canary deployments. -* Forward incoming user traffic by using `Service` and `Ingress` resources to the correct application version. 
-* Use multiple mechanisms to collect analysis metrics to validate the deployment of a new version of an application. - -.Argo Rollouts resources -[options="header"] -|=== -|Name |Description -|`Rollout` |This CR enables the deployment of applications by using canary or blue-green deployment strategies. It replaces the in-built Kubernetes `Deployment` resource. -|`AnalysisRun` |This CR is used to perform an analysis and aggregate the results of analysis to guide the user toward the successful deployment delivery of an application. The `AnalysisRun` CR is an instance of the `AnalysisTemplate` CR. -|`AnalysisTemplate` |The `AnalysisTemplate` CR is a template file that provides instructions on how to query metrics. The result of these instructions is attached to a rollout in the form of the `AnalysisRun` CR. The `AnalysisTemplate` CR can be defined globally on the cluster or on a specific rollout. You can link a list of `AnalysisTemplate` to be used on replica sets by creating an `Experiment` custom resource. -|`Experiment` |The `Experiment` CR is used to run short-lived analysis on an application during its deployment to ensure the application is deployed correctly. The `Experiment` CR can be used independently or run as part of the `Rollout` CR. -|`Service` and `Ingress` | Argo Rollouts natively support routing traffic by services and ingresses by using the Service and Ingress controllers. -|`Route` and `VirtualService` |The OpenShift `Route` and {SMProductName} `VirtualService` resources are used to perform traffic splitting across different application versions. -|=== \ No newline at end of file +* *Kubernetes job metrics*: Argo Rollouts support the Kubernetes `Job` resource to run analysis on resource metrics. You can verify a successful deployment of an application based on the successful run of Kubernetes jobs. 
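+
+For example, a `Job`-based analysis can be declared in an `AnalysisTemplate` CR similar to the following sketch. The template name, container image, and test command are illustrative assumptions, not prescribed values:
+
+[source,yaml]
+----
+apiVersion: argoproj.io/v1alpha1
+kind: AnalysisTemplate
+metadata:
+  name: smoke-test # illustrative name
+spec:
+  metrics:
+  - name: smoke-test
+    provider:
+      job:
+        spec:
+          backoffLimit: 0
+          template:
+            spec:
+              restartPolicy: Never
+              containers:
+              - name: smoke-test
+                image: curlimages/curl
+                # illustrative health check; replace with your own test
+                command: [sh, -c, "curl -fsS http://example-app/healthz"]
+----
+The analysis succeeds when the Kubernetes job completes and fails when the job fails.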
\ No newline at end of file diff --git a/modules/gitops-argo-rollouts-components.adoc new file mode 100644 index 000000000000..7d4afceb72b8 --- /dev/null +++ b/modules/gitops-argo-rollouts-components.adoc @@ -0,0 +1,20 @@ +// Module included in the following assemblies: +// +// * argo_rollouts/argo-rollouts-overview.adoc + +:_mod-docs-content-type: CONCEPT +[id="gitops-argo-rollouts-components_{context}"] += Argo Rollouts components + +Argo Rollouts consists of several components that enable users to practice progressive delivery in {OCP}. + +.Argo Rollouts components +[options="header"] +|=== +|Name |Description +|Argo Rollouts controller |The Argo Rollouts Controller is an alternative to the standard `Deployment` resource and coexists alongside it. This controller only responds to changes in the Argo Rollouts resources and manages the `Rollout` custom resource (CR). The Argo Rollouts Controller does not modify standard deployment resources. +|AnalysisRun controller |The AnalysisRun controller manages `AnalysisRun` and `AnalysisTemplate` CRs and performs analysis for them. It connects a rollout to the metrics provider and defines thresholds for metrics that determine if a deployment update is successful for your application. +|`Experiment` controller |The `Experiment` controller runs analysis on short-lived replica sets and manages the `Experiment` custom resource. You can also integrate this controller with the `Rollout` resource by specifying the `experiment` step in the canary deployment `strategy` field. +|`Service` and `Ingress` controllers |The Service controller manages the `Service` resources and the Ingress controller manages the `Ingress` resources modified by Argo Rollouts. These controllers inject additional metadata annotations in the application instances for traffic management. +|Argo Rollouts CLI and UI |Argo Rollouts supports an `oc/kubectl` plugin called Argo Rollouts CLI. 
You can use it to interact with resources, such as rollouts, analyses, and experiments, from the command line. It can perform operations, such as `pause`, `promote`, or `retry`. The Argo Rollouts CLI plugin can start a local web UI dashboard in the browser to enhance the experience of visualizing the Argo Rollouts resources. +|=== \ No newline at end of file diff --git a/modules/gitops-argo-rollouts-resources.adoc new file mode 100644 index 000000000000..28a435e92b81 --- /dev/null +++ b/modules/gitops-argo-rollouts-resources.adoc @@ -0,0 +1,32 @@ +// Module included in the following assemblies: +// +// * argo_rollouts/argo-rollouts-overview.adoc + +:_mod-docs-content-type: CONCEPT +[id="gitops-argo-rollouts-resources_{context}"] += Argo Rollouts resources + +Argo Rollouts manage several resources to enable progressive delivery: + +* *Rollouts-specific resources*: For example, `Rollout`, `AnalysisRun`, or `Experiment`. +* *Kubernetes networking resources*: For example, `Service`, `Ingress`, or `Route` for network traffic shaping. Argo Rollouts integrate with these resources; this integration is referred to as traffic management. + +These resources are essential for customizing the deployment of applications through the `Rollout` CR. + +Argo Rollouts support the following actions: + +* Route percentage-based traffic for canary deployments. +* Forward incoming user traffic to the correct application version by using `Service` and `Ingress` resources. +* Use multiple mechanisms to collect analysis metrics to validate the deployment of a new version of an application. + +.Argo Rollouts resources +[options="header"] +|=== +|Name |Description +|`Rollout` |This custom resource (CR) enables the deployment of applications by using canary or blue-green deployment strategies. It replaces the built-in Kubernetes `Deployment` resource. 
+|`AnalysisRun` |You can use this CR to perform an analysis and aggregate the results to guide the user in successfully deploying and delivering the application. The `AnalysisRun` CR is an instance of the `AnalysisTemplate` CR. +|`AnalysisTemplate` |The `AnalysisTemplate` CR is a template that provides instructions about querying metrics. The result of these instructions is attached to a rollout in the form of the `AnalysisRun` CR. The `AnalysisTemplate` CR can be defined globally on the cluster or on a specific rollout. You can link a list of `AnalysisTemplate` resources to be used on replica sets by creating an `Experiment` custom resource. +|`Experiment` |The `Experiment` CR is used to run short-lived analysis on an application during its deployment to ensure the application is deployed correctly. You can use the `Experiment` CR independently or run it as a part of the `Rollout` CR. +|`Service` and `Ingress` | Argo Rollouts natively support routing traffic by services and ingresses by using the Service and Ingress controllers. +|`Route` and `VirtualService` |The OpenShift `Route` and {SMProductName} `VirtualService` resources are used to perform traffic splitting across different application versions. 
+|=== \ No newline at end of file diff --git a/modules/gitops-benefits-of-managing-secrets-using-sscsid-with-gitops-overview.adoc new file mode 100644 index 000000000000..b9f908c441fa --- /dev/null +++ b/modules/gitops-benefits-of-managing-secrets-using-sscsid-with-gitops-overview.adoc @@ -0,0 +1,13 @@ +// Module is included in the following assemblies: +// +// * securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc + +:_mod-docs-content-type: CONCEPT +[id="gitops-benefits-of-managing-secrets-using-sscsid-with-gitops-overview_{context}"] += Benefits of managing secrets using the Secrets Store CSI driver with {gitops-shortname} + +Integrating the SSCSI driver with the {gitops-shortname} Operator provides the following benefits: + +* Enhances the security and efficiency of your {gitops-shortname} workflows. +* Facilitates the secure attachment of secrets to deployment pods as a volume. +* Ensures that sensitive information is accessed securely and efficiently. \ No newline at end of file diff --git a/modules/gitops-configuring-respect-rbac-using-gitops.adoc index dbbd1115367c..dc160766d422 100644 --- a/modules/gitops-configuring-respect-rbac-using-gitops.adoc +++ b/modules/gitops-configuring-respect-rbac-using-gitops.adoc @@ -2,7 +2,7 @@ // // * declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc -:_mod-docs-content-type: PROCEDURE +:_mod-docs-content-type: CONCEPT [id="gitops-configuring-respect-rbac-using-gitops_{context}"] = Configuring respectRBAC using {gitops-title} @@ -14,87 +14,4 @@ You can enable the `respectRBAC` feature by creating an Argo CD instance through .Prerequisites -Ensure that you created and updated a namespace in the `Subscription` resource, so `Subscription` can host a cluster-scoped Argo CD instance. 
For more information, see "Using an Argo CD instance to manage cluster-scoped resources". - -[id="configuring-respectRBAC-using-the-cli_{context}"] -== Configuring respectRBAC using the CLI - -You can configure the `respectRBAC` feature by using the CLI. - -.Procedure - -. Create a YAML object file, for example, `argo-cd-resource.yaml`, to configure the `respectRBAC` feature: -+ -.Example `ArgoCD` YAML to create `respectRBAC` -[source,yaml] ----- -apiVersion: argoproj.io/v1beta1 -kind: ArgoCD -metadata: - name: example-argocd #<1> -spec: - controller: - respectRBAC: strict #<2> ----- -<1> Specify the name of the Argo CD instance. -<2> You can specify the value of the `.spec.controller.respectRBAC` key in the `ArgoCD` resource as `normal` or `strict`. Consider setting a value as `normal` to balance accuracy and speed as resource listing is a lightweight operation. Set the value as `strict` if Argo CD reports errors indicating that it cannot access resources when you set the value as `normal`. Setting `strict` increases the number of API calls to the server and it is more accurate compared to `normal` as Argo CD performs additional validations of RBAC resources to determine permissions. - -. Apply the changes to the YAML file by running the following command. -+ -[source,terminal] ----- -$ oc apply -f argocd-resource.yaml -n argo-cd-instance #<1> ----- -<1> Specify the name of the YAML file that includes the `ArgoCD` resource and the namespace that hosts `ArgoCD`. -+ -. Verify that the status of the `.status.phase` field is `Available` by running the following command: -+ -[source,terminal] ----- -$ oc get argocd -n -o jsonpath='{.status.phase}' #<1> ----- -<1> Replace `` with the name of your Argo CD instance for example, `example-argocd`. - -. Verify that the `resource.respectRBAC` parameter in the `ConfigMap` resource is updated successfully: -.. 
To retrieve the contents of the `argocd-cm` config map, run the following command: -+ -[source,terminal] ----- -$ oc get cm argocd-cm -n -o yaml ----- -.. Verify that the `argocd-cm` `ConfigMap` contains the `resource.respectRBAC` parameter and ensure its value is set to either `strict` or `normal`. - -[id="configuring-respectRBAC-using-the-web-UI_{context}"] -== Configuring respectRBAC by using the web console - -You can configure `respectRBAC` in the web console. - -.Procedure - -. Log in to the {OCP} web console. - -. In the *Administrator* perspective of the web console, click *Operators* -> *Installed Operators*. - -. Create or select the project where you want to install the user-defined Argo CD instance from the *Project* list. - -. Select *{gitops-title}* from the installed Operators list and click the *Argo CD* tab. - -. Configure the `respectRBAC` parameter in the *Argo CD* tab. -+ -[source,yaml] ----- -spec: - controller: - respectRBAC: strict ----- - -. Click *Create*. -+ -After successful installation, verify that the Argo CD instance is listed under the *Argo CD* tab and the *Status* is *Available*. - -. After the Argo CD instance is created, verify that the `resource.respectRBAC` parameter in the `ConfigMap` resource is updated successfully by completing the following steps. - -.. In the *Administrator* perspective, go to *Workload* -> *ConfigMaps*. -.. In the *Project* option, select the *Argo CD* namespace. -.. Select the `argocd-cm` config map. -.. Select the *YAML* tab to view the `resource.respectRBAC` parameter. \ No newline at end of file +Ensure that you created and updated a namespace in the `Subscription` resource, so `Subscription` can host a cluster-scoped Argo CD instance. For more information, see "Using an Argo CD instance to manage cluster-scoped resources". 
\ No newline at end of file diff --git a/modules/gitops-configuring-respect-rbac-using-the-cli.adoc new file mode 100644 index 000000000000..abd711e78d4e --- /dev/null +++ b/modules/gitops-configuring-respect-rbac-using-the-cli.adoc @@ -0,0 +1,52 @@ +// Module included in the following assembly: +// +// * declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc + +:_mod-docs-content-type: PROCEDURE +[id="configuring-respectRBAC-using-the-cli_{context}"] += Configuring respectRBAC using the CLI + +You can configure the `respectRBAC` feature by using the CLI. + +.Procedure + +. Create a YAML object file, for example, `argo-cd-resource.yaml`, to configure the `respectRBAC` feature: ++ +.Example `ArgoCD` YAML to configure the `respectRBAC` feature +[source,yaml] +---- +apiVersion: argoproj.io/v1beta1 +kind: ArgoCD +metadata: + name: example-argocd #<1> +spec: + controller: + respectRBAC: strict #<2> +---- +<1> Specify the name of the Argo CD instance. +<2> You can specify the value of the `.spec.controller.respectRBAC` key in the `ArgoCD` resource as `normal` or `strict`. Consider setting the value to `normal` to balance accuracy and speed, because resource listing is a lightweight operation. Set the value to `strict` if Argo CD reports errors indicating that it cannot access resources when the value is `normal`. The `strict` setting increases the number of API calls to the server, but it is more accurate than `normal` because Argo CD performs additional validation of RBAC resources to determine permissions. + +. Apply the changes to the YAML file by running the following command: ++ +[source,terminal] +---- +$ oc apply -f argo-cd-resource.yaml -n <namespace> #<1> +---- +<1> Replace `argo-cd-resource.yaml` with the name of the YAML file that defines the `ArgoCD` resource and `<namespace>` with the namespace that hosts the `ArgoCD` resource. ++ +. 
Verify that the value of the `.status.phase` field is `Available` by running the following command: ++ +[source,terminal] +---- +$ oc get argocd <argocd-instance-name> -n <namespace> -o jsonpath='{.status.phase}' #<1> +---- +<1> Replace `<argocd-instance-name>` with the name of your Argo CD instance, for example, `example-argocd`, and `<namespace>` with the namespace that hosts it. + +. Verify that the `resource.respectRBAC` parameter in the `ConfigMap` resource is updated successfully: +.. To retrieve the contents of the `argocd-cm` config map, run the following command: ++ +[source,terminal] +---- +$ oc get cm argocd-cm -n <namespace> -o yaml +---- +.. Verify that the `argocd-cm` `ConfigMap` contains the `resource.respectRBAC` parameter and ensure its value is set to either `strict` or `normal`. \ No newline at end of file diff --git a/modules/gitops-configuring-respect-rbac-using-the-web-console.adoc new file mode 100644 index 000000000000..b1827e48fe7a --- /dev/null +++ b/modules/gitops-configuring-respect-rbac-using-the-web-console.adoc @@ -0,0 +1,39 @@ +// Module included in the following assembly: +// +// * declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc + +:_mod-docs-content-type: PROCEDURE +[id="configuring-respectRBAC-using-the-web-UI_{context}"] += Configuring respectRBAC by using the web console + +You can configure `respectRBAC` in the web console. + +.Procedure + +. Log in to the {OCP} web console. + +. In the *Administrator* perspective of the web console, click *Operators* -> *Installed Operators*. + +. Create or select the project where you want to install the user-defined Argo CD instance from the *Project* list. + +. Select *{gitops-title}* from the installed Operators list and click the *Argo CD* tab. + +. Configure the `respectRBAC` parameter in the *Argo CD* tab. ++ +[source,yaml] +---- +spec: + controller: + respectRBAC: strict +---- + +. Click *Create*. 
++ +After successful installation, verify that the Argo CD instance is listed under the *Argo CD* tab and the *Status* is *Available*. + +. After the Argo CD instance is created, verify that the `resource.respectRBAC` parameter in the `ConfigMap` resource is updated successfully by completing the following steps: + +.. In the *Administrator* perspective, go to *Workloads* -> *ConfigMaps*. +.. In the *Project* option, select the *Argo CD* namespace. +.. Select the `argocd-cm` config map. +.. Select the *YAML* tab to view the `resource.respectRBAC` parameter. \ No newline at end of file diff --git a/modules/gitops-create-configure-aggregated-user-defined-permissions.adoc new file mode 100644 index 000000000000..37a046b81d35 --- /dev/null +++ b/modules/gitops-create-configure-aggregated-user-defined-permissions.adoc @@ -0,0 +1,15 @@ +// Module included in the following assembly: +// +// * declarative_clusterconfig/customizing-permissions-by-creating-aggregated-cluster-roles.adoc + +:_mod-docs-content-type: PROCEDURE +[id="create-configure-aggregated-user-defined-permissions_{context}"] += Create user-defined cluster roles and configure user-defined permissions + +To add user-defined permissions to the `<argocd-instance-name>-argocd-application-controller-admin` cluster role and the aggregated cluster role, you must create one or more user-defined cluster roles with the `argocd/aggregate-to-admin: 'true'` label and then configure the user-defined permissions for Application Controller. + +[NOTE] +==== +* The aggregated cluster role inherits permissions from the `<argocd-instance-name>-argocd-application-controller-admin` and `<argocd-instance-name>-argocd-application-controller-view` cluster roles. +* The `<argocd-instance-name>-argocd-application-controller-admin` cluster role inherits permissions from the user-defined cluster role. 
+==== \ No newline at end of file diff --git a/modules/gitops-creating-aggregated-cluster-roles.adoc b/modules/gitops-creating-aggregated-cluster-roles.adoc index 938453e81f8d..770df56ff57e 100644 --- a/modules/gitops-creating-aggregated-cluster-roles.adoc +++ b/modules/gitops-creating-aggregated-cluster-roles.adoc @@ -9,24 +9,4 @@ The process of creating aggregated cluster roles consists of the following procedures: . Enabling the creation of aggregated cluster roles -. Creating user-defined cluster roles and configuring user-defined permissions for Application Controller - -[id="enable-creation-of-aggregated-cluster-roles_{context}"] -== Enable the creation of aggregated cluster roles - -You can enable the creation of aggregated cluster roles by setting the value of the `.spec.aggregatedClusterRoles` field to `true` in the Argo CD custom resource (CR). When you enable the creation of aggregated cluster roles, the {gitops-title} Operator takes the following actions: - -* Creates an `--argocd-application-controller` aggregated cluster role with a predefined `aggregationRule` field by default. -* Creates a corresponding cluster role binding and manages it. -* Creates and manages `view` and `admin` cluster roles for Application Controller to add user-defined permissions into the aggregated cluster role. - -[id="create-configure-aggregated-user-defined-permissions_{context}"] -== Create user-defined cluster roles and configure user-defined permissions - -To configure user-defined permissions into the `--argocd-application-controller-admin` cluster role and aggregated cluster role, you must create one or more user-defined cluster roles with the `argocd/aggregate-to-admin: 'true'` label and then configure the user-defined permissions for Application Controller. - -[NOTE] -==== -* The aggregated cluster role inherits permissions from the `--argocd-application-controller-admin` and `--argocd-application-controller-view` cluster roles. 
-* The `--argocd-application-controller-admin` cluster role inherits permissions from the user-defined cluster role. -==== \ No newline at end of file +. Creating user-defined cluster roles and configuring user-defined permissions for Application Controller \ No newline at end of file diff --git a/modules/gitops-enable-creation-of-aggregated-cluster-roles.adoc new file mode 100644 index 000000000000..9b00931100b0 --- /dev/null +++ b/modules/gitops-enable-creation-of-aggregated-cluster-roles.adoc @@ -0,0 +1,17 @@ +// Module included in the following assembly: +// +// * declarative_clusterconfig/customizing-permissions-by-creating-aggregated-cluster-roles.adoc + +:_mod-docs-content-type: PROCEDURE +[id="enable-creation-of-aggregated-cluster-roles_{context}"] += Enable the creation of aggregated cluster roles + +When you enable the creation of aggregated cluster roles, the {gitops-title} Operator takes the following actions: + +* Creates an `<argocd-instance-name>-argocd-application-controller` aggregated cluster role with a predefined `aggregationRule` field by default. +* Creates a corresponding cluster role binding and manages it. +* Creates and manages `view` and `admin` cluster roles for Application Controller to add user-defined permissions into the aggregated cluster role. + +.Procedure + +. Enable the creation of aggregated cluster roles by setting the value of the `.spec.aggregatedClusterRoles` field to `true` in the Argo CD custom resource (CR). 
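+
+For example, a minimal `ArgoCD` CR that enables this setting can look like the following sketch. The instance name `example-argocd` is illustrative:
+
+[source,yaml]
+----
+apiVersion: argoproj.io/v1beta1
+kind: ArgoCD
+metadata:
+  name: example-argocd # illustrative name
+spec:
+  aggregatedClusterRoles: true
+----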
\ No newline at end of file diff --git a/modules/gitops-install-olm-operators-using-gitops.adoc new file mode 100644 index 000000000000..78377a99fa84 --- /dev/null +++ b/modules/gitops-install-olm-operators-using-gitops.adoc @@ -0,0 +1,13 @@ +// Module included in the following assembly: +// +// * declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc + +:_mod-docs-content-type: CONCEPT +[id="gitops-install-olm-operators-using-gitops_{context}"] += Install OLM Operators using {gitops-title} + +{gitops-title} with cluster configurations manages specific cluster-scoped resources and takes care of installing cluster Operators or any namespace-scoped OLM Operators. + +Consider a case where, as a cluster administrator, you must install an OLM Operator such as Tekton. You use the {OCP} web console to manually install a Tekton Operator or the OpenShift CLI to manually install a Tekton subscription and Tekton Operator group on your cluster. + +{gitops-title} places your Kubernetes resources in your Git repository. As a cluster administrator, use {gitops-title} to manage and automate the installation of other OLM Operators without any manual procedures. For example, after you place the Tekton subscription in your Git repository by using {gitops-title}, {gitops-title} automatically takes the Tekton subscription from your Git repository and installs the Tekton Operator on your cluster. 
\ No newline at end of file diff --git a/modules/gitops-installing-cluster-scoped-operators.adoc new file mode 100644 index 000000000000..d2c03facc23d --- /dev/null +++ b/modules/gitops-installing-cluster-scoped-operators.adoc @@ -0,0 +1,27 @@ +// Module included in the following assembly: +// +// * declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc + +:_mod-docs-content-type: CONCEPT +[id="installing-cluster-scoped-operators_{context}"] += Installing cluster-scoped Operators + +Operator Lifecycle Manager (OLM) uses a default `global-operators` Operator group in the `openshift-operators` namespace for cluster-scoped Operators. Therefore, you do not have to manage the `OperatorGroup` resource in your Git repository. However, for namespace-scoped Operators, you must manage the `OperatorGroup` resource in that namespace. + +To install cluster-scoped Operators, create and place the `Subscription` resource of the required Operator in your Git repository. 
+ +.Example: Grafana Operator subscription + +[source,yaml] +---- +apiVersion: operators.coreos.com/v1alpha1 +kind: Subscription +metadata: + name: grafana +spec: + channel: v4 + installPlanApproval: Automatic + name: grafana-operator + source: redhat-operators + sourceNamespace: openshift-marketplace +---- \ No newline at end of file diff --git a/modules/gitops-installing-olm-operators-using-gitops.adoc b/modules/gitops-installing-olm-operators-using-gitops.adoc deleted file mode 100644 index e07648da1b69..000000000000 --- a/modules/gitops-installing-olm-operators-using-gitops.adoc +++ /dev/null @@ -1,82 +0,0 @@ -// Module included in the following assembly: -// -// * declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc - -:_mod-docs-content-type: PROCEDURE -[id="gitops-installing-olm-operators-using-gitops_{context}"] -= Installing OLM Operators using {gitops-title} - -{gitops-title} with cluster configurations manages specific cluster-scoped resources and takes care of installing cluster Operators or any namespace-scoped OLM Operators. - -Consider a case where as a cluster administrator, you have to install an OLM Operator such as Tekton. You use the {OCP} web console to manually install a Tekton Operator or the OpenShift CLI to manually install a Tekton subscription and Tekton Operator group on your cluster. - -{gitops-title} places your Kubernetes resources in your Git repository. As a cluster administrator, use {gitops-title} to manage and automate the installation of other OLM Operators without any manual procedures. For example, after you place the Tekton subscription in your Git repository by using {gitops-title}, the {gitops-title} automatically takes this Tekton subscription from your Git repository and installs the Tekton Operator on your cluster. 
- -[id="installing-cluster-scoped-operators_{context}"] -== Installing cluster-scoped Operators - -Operator Lifecycle Manager (OLM) uses a default `global-operators` Operator group in the `openshift-operators` namespace for cluster-scoped Operators. Hence you do not have to manage the `OperatorGroup` resource in your Gitops repository. However, for namespace-scoped Operators, you must manage the `OperatorGroup` resource in that namespace. - -To install cluster-scoped Operators, create and place the `Subscription` resource of the required Operator in your Git repository. - -.Example: Grafana Operator subscription - -[source,yaml] ----- -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: grafana -spec: - channel: v4 - installPlanApproval: Automatic - name: grafana-operator - source: redhat-operators - sourceNamespace: openshift-marketplace ----- - -[id="installing-namespace-scoped-operators_{context}"] -== Installing namepace-scoped Operators - -To install namespace-scoped Operators, create and place the `Subscription` and `OperatorGroup` resources of the required Operator in your Git repository. - -.Example: Ansible Automation Platform Resource Operator - -[source,yaml] ----- -# ... -apiVersion: v1 -kind: Namespace -metadata: - labels: - openshift.io/cluster-monitoring: "true" - name: ansible-automation-platform -# ... -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: ansible-automation-platform-operator - namespace: ansible-automation-platform -spec: - targetNamespaces: - - ansible-automation-platform -# ... -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: ansible-automation-platform - namespace: ansible-automation-platform -spec: - channel: patch-me - installPlanApproval: Automatic - name: ansible-automation-platform-operator - source: redhat-operators - sourceNamespace: openshift-marketplace -# ... 
----- - -[IMPORTANT] -==== -When deploying multiple Operators using {gitops-title}, you must create only a single Operator group in the corresponding namespace. If more than one Operator group exists in a single namespace, any CSV created in that namespace transition to a `failure` state with the `TooManyOperatorGroups` reason. After the number of Operator groups in their corresponding namespaces reaches one, all the previous `failure` state CSVs transition to `pending` state. You must manually approve the pending install plan to complete the Operator installation. -==== - diff --git a/modules/gitops-managing-secrets-using-sscsid-with-gitops-overview.adoc b/modules/gitops-managing-secrets-using-sscsid-with-gitops-overview.adoc index bdd5fb83a7e5..b785c3c8e9da 100644 --- a/modules/gitops-managing-secrets-using-sscsid-with-gitops-overview.adoc +++ b/modules/gitops-managing-secrets-using-sscsid-with-gitops-overview.adoc @@ -11,55 +11,4 @@ Some applications need sensitive information, such as passwords and usernames wh [IMPORTANT] ==== Anyone who is authorized to create a pod in a namespace can use that RBAC to read any secret in that namespace. With the SSCSI Driver Operator, you can use an external secrets store to store and provide sensitive information to pods securely. -==== - -The process of integrating the {OCP} SSCSI driver with the {gitops-shortname} Operator consists of the following procedures: - -. xref:../securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc#gitops-storing-aws-secret-manager-resources-in-gitops-repository_managing-secrets-securely-using-sscsid-with-gitops[Storing AWS Secrets Manager resources in {gitops-shortname} repository] -. xref:../securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc#gitops-configuring-sscsi-driver-to-mount-secrets-from-aws-secrets-manager_managing-secrets-securely-using-sscsid-with-gitops[Configuring SSCSI driver to mount secrets from AWS Secrets Manager] -. 
xref:../securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc#gitops-configuring-gitops-managed-resources-to-use-mounted-secrets_managing-secrets-securely-using-sscsid-with-gitops[Configuring {gitops-shortname} managed resources to use mounted secrets] - -[id="benefits_{context}"] -== Benefits -Integrating the SSCSI driver with the {gitops-shortname} Operator provides the following benefits: - -* Enhance the security and efficiency of your {gitops-shortname} workflows -* Facilitate the secure attachment of secrets into deployment pods as a volume -* Ensure that sensitive information is accessed securely and efficiently - -[id="secrets-store-providers_{context}"] -== Secrets store providers -The following secrets store providers are available for use with the Secrets Store CSI Driver Operator: - -* AWS Secrets Manager -* AWS Systems Manager Parameter Store -* Microsoft Azure Key Vault - -As an example, consider that you are using AWS Secrets Manager as your secrets store provider with the SSCSI Driver Operator. The following example shows the directory structure in {gitops-shortname} repository that is ready to use the secrets from AWS Secrets Manager: - -.Example directory structure in {gitops-shortname} repository ----- -├── config -│   ├── argocd -│   │   ├── argo-app.yaml -│   │   ├── secret-provider-app.yaml <3> -│   │   ├── ... -│   └── sscsid <1> -│   └── aws-provider.yaml <2> -├── environments -│   ├── dev <4> -│   │   ├── apps -│   │   │   └── app-taxi <5> -│   │   │   ├── ... -│   │   ├── credentialsrequest-dir-aws <6> -│   │   └── env -│   │   ├── ... -│   ├── new-env -│   │   ├── ... ----- -<1> Directory that stores the `aws-provider.yaml` file. -<2> Configuration file that installs the AWS Secrets Manager provider and deploys resources for it. -<3> Configuration file that creates an application and deploys resources for AWS Secrets Manager. -<4> Directory that stores the deployment pod and credential requests. 
-<5> Directory that stores the `SecretProviderClass` resources to define your secrets store provider.
-<6> Folder that stores the `credentialsrequest.yaml` file. This file contains the configuration for the credentials request to mount a secret to the deployment pod.
\ No newline at end of file +==== \ No newline at end of file diff --git a/modules/gitops-namespace-scoped-operators.adoc b/modules/gitops-namespace-scoped-operators.adoc new file mode 100644 index 000000000000..97485eb6e688 --- /dev/null +++ b/modules/gitops-namespace-scoped-operators.adoc @@ -0,0 +1,49 @@
+// Module included in the following assembly:
+//
+// * declarative_clusterconfig/configuring-an-openshift-cluster-by-deploying-an-application-with-cluster-configurations.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="installing-namespace-scoped-operators_{context}"]
+= Installing namespace-scoped Operators
+
+To install namespace-scoped Operators, create and place the `Subscription` and `OperatorGroup` resources of the required Operator in your Git repository.
+
+.Example: Ansible Automation Platform Resource Operator
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Namespace
+metadata:
+  labels:
+    openshift.io/cluster-monitoring: "true"
+  name: ansible-automation-platform
+---
+apiVersion: operators.coreos.com/v1
+kind: OperatorGroup
+metadata:
+  name: ansible-automation-platform-operator
+  namespace: ansible-automation-platform
+spec:
+  targetNamespaces:
+    - ansible-automation-platform
+---
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: ansible-automation-platform
+  namespace: ansible-automation-platform
+spec:
+  channel: patch-me
+  installPlanApproval: Automatic
+  name: ansible-automation-platform-operator
+  source: redhat-operators
+  sourceNamespace: openshift-marketplace
+----
+
+[IMPORTANT]
+====
+When deploying multiple Operators using {gitops-title}, you must create only a single Operator group in the corresponding namespace.
If more than one Operator group exists in a single namespace, any CSV created in that namespace transitions to a `failure` state with the `TooManyOperatorGroups` reason. After the number of Operator groups in their corresponding namespaces reaches one, all the previous `failure` state CSVs transition to a `pending` state. You must manually approve the pending install plan to complete the Operator installation.
+==== \ No newline at end of file diff --git a/modules/gitops-secrets-store-providers.adoc b/modules/gitops-secrets-store-providers.adoc new file mode 100644 index 000000000000..10e4be7b1294 --- /dev/null +++ b/modules/gitops-secrets-store-providers.adoc @@ -0,0 +1,42 @@
+// Module is included in the following assemblies:
+//
+// * securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc
+
+:_mod-docs-content-type: CONCEPT
+[id="gitops-secrets-store-providers_{context}"]
+= Secrets store providers
+
+The following secrets store providers are available for use with the Secrets Store CSI Driver Operator:
+
+* AWS Secrets Manager
+* AWS Systems Manager Parameter Store
+* Microsoft Azure Key Vault
+
+As an example, consider that you are using AWS Secrets Manager as your secrets store provider with the SSCSI Driver Operator. The following example shows the directory structure in a {gitops-shortname} repository that is ready to use secrets from AWS Secrets Manager:
+
+.Example directory structure in {gitops-shortname} repository
+----
+├── config
+│   ├── argocd
+│   │   ├── argo-app.yaml
+│   │   ├── secret-provider-app.yaml <3>
+│   │   ├── ...
+│   └── sscsid <1>
+│   └── aws-provider.yaml <2>
+├── environments
+│   ├── dev <4>
+│   │   ├── apps
+│   │   │   └── app-taxi <5>
+│   │   │   ├── ...
+│   │   ├── credentialsrequest-dir-aws <6>
+│   │   └── env
+│   │   ├── ...
+│   ├── new-env
+│   │   ├── ...
+----
+<1> Directory that stores the `aws-provider.yaml` file.
+<2> Configuration file that installs the AWS Secrets Manager provider and deploys resources for it.
+<3> Configuration file that creates an application and deploys resources for AWS Secrets Manager.
+<4> Directory that stores the deployment pod and credential requests.
+<5> Directory that stores the `SecretProviderClass` resources to define your secrets store provider.
+<6> Directory that stores the `credentialsrequest.yaml` file. This file contains the configuration for the credentials request to mount a secret to the deployment pod.
\ No newline at end of file diff --git a/securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc b/securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc index b5f61c3df86f..61b3eb6080b9 100644 --- a/securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc +++ b/securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc @@ -11,6 +11,18 @@ This guide walks you through the process of integrating the Secrets Store Contai // Overview of managing secrets using Secrets Store CSI driver with GitOps include::modules/gitops-managing-secrets-using-sscsid-with-gitops-overview.adoc[leveloffset=+1]
+The process of integrating the {OCP} SSCSI driver with the {gitops-shortname} Operator consists of the following procedures:
+
+. xref:../securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc#gitops-storing-aws-secret-manager-resources-in-gitops-repository_managing-secrets-securely-using-sscsid-with-gitops[Storing AWS Secrets Manager resources in {gitops-shortname} repository]
+. xref:../securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc#gitops-configuring-sscsi-driver-to-mount-secrets-from-aws-secrets-manager_managing-secrets-securely-using-sscsid-with-gitops[Configuring SSCSI driver to mount secrets from AWS Secrets Manager]
+.
xref:../securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.adoc#gitops-configuring-gitops-managed-resources-to-use-mounted-secrets_managing-secrets-securely-using-sscsid-with-gitops[Configuring {gitops-shortname} managed resources to use mounted secrets] + +// Benefits of managing secrets using Secrets Store CSI driver with {gitops-shortname} +include::modules/gitops-benefits-of-managing-secrets-using-sscsid-with-gitops-overview.adoc[leveloffset=+2] + +// Secrets store providers +include::modules/gitops-secrets-store-providers.adoc[leveloffset=+2] + [id="prerequisites_{context}"] == Prerequisites * You have access to the cluster with `cluster-admin` privileges.