From d39903df7c95f585cc650d6908037d010ca05219 Mon Sep 17 00:00:00 2001 From: Max Bridges Date: Fri, 7 Nov 2025 16:01:12 -0500 Subject: [PATCH] remove unused assemblies --- ai_workloads/kueue/troubleshooting.adoc | 31 -- architecture/argocd.adoc | 25 -- .../installing/data-mover-intro.adoc | 29 -- ...sing-data-mover-for-csi-snapshots-doc.adoc | 276 ------------------ ...installing-aws-network-customizations.adoc | 133 --------- ...stalling-azure-network-customizations.adoc | 101 ------- ...installing-gcp-network-customizations.adoc | 137 --------- machine_management/adding-rhel-compute.adoc | 52 ---- machine_management/more-rhel-compute.adoc | 48 --- .../ingress-operator.adoc | 2 - .../certificate-types-descriptions-index.adoc | 13 - ...itelisted-IP-addresses-for-sre-access.adoc | 39 --- .../zero-trust-manager-features.adoc | 18 -- .../osd-persistent-storage-aws-efs-csi.adoc | 90 ------ welcome/about-hcp.adoc | 102 ------- .../cloud-experts-rosa-hcp-sts-explained.adoc | 129 -------- 16 files changed, 1225 deletions(-) delete mode 100644 ai_workloads/kueue/troubleshooting.adoc delete mode 100644 architecture/argocd.adoc delete mode 100644 backup_and_restore/application_backup_and_restore/installing/data-mover-intro.adoc delete mode 100644 backup_and_restore/application_backup_and_restore/installing/oadp-using-data-mover-for-csi-snapshots-doc.adoc delete mode 100644 installing/installing_aws/ipi/installing-aws-network-customizations.adoc delete mode 100644 installing/installing_azure/ipi/installing-azure-network-customizations.adoc delete mode 100644 installing/installing_gcp/installing-gcp-network-customizations.adoc delete mode 100644 machine_management/adding-rhel-compute.adoc delete mode 100644 machine_management/more-rhel-compute.adoc delete mode 100644 networking/networking/networking_operators/ingress-operator.adoc delete mode 100644 security/certificate_types_descriptions/certificate-types-descriptions-index.adoc delete mode 100644 security/rh-required-whitelisted-IP-addresses-for-sre-access.adoc delete mode 100644 security/zero_trust_workload_identity_manager/zero-trust-manager-features.adoc delete mode 100644 storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc delete mode 100644 welcome/about-hcp.adoc delete mode 100644 welcome/cloud-experts-rosa-hcp-sts-explained.adoc diff --git a/ai_workloads/kueue/troubleshooting.adoc b/ai_workloads/kueue/troubleshooting.adoc deleted file mode 100644 index 6dab1f8bec4f..000000000000 --- a/ai_workloads/kueue/troubleshooting.adoc +++ /dev/null @@ -1,31 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -include::_attributes/common-attributes.adoc[] -[id="troubleshooting"] -= Troubleshooting -:context: troubleshooting - -toc::[] - -// commented out - note for TS docs - -// Troubleshooting installations -// Verifying node health -// Troubleshooting network issues -// Troubleshooting Operator issues -// Investigating pod issues -// Diagnosing CLI issues -//// -Troubleshooting Jobs -Troubleshooting the status of a Job - -Troubleshooting Queues -Troubleshooting the status of a LocalQueue or ClusterQueue - -Troubleshooting Provisioning Request in Kueue -Troubleshooting the status of a Provisioning Request in Kueue - -Troubleshooting Pods -Troubleshooting the status of a Pod or group of Pods - -Troubleshooting delete ClusterQueue -//// diff --git a/architecture/argocd.adoc b/architecture/argocd.adoc deleted file mode 100644 index ede48546c22b..000000000000 --- a/architecture/argocd.adoc +++ /dev/null @@ -1,25 +0,0 @@ 
-:_mod-docs-content-type: ASSEMBLY -[id="argocd"] -= Using ArgoCD with {product-title} -include::_attributes/common-attributes.adoc[] - -:context: argocd - -toc::[] - -[id="argocd-what"] -== What does ArgoCD do? - -ArgoCD is a declarative continuous delivery tool that leverages GitOps to maintain cluster resources. ArgoCD is implemented as a controller that continuously monitors application definitions and configurations defined in a Git repository and compares the specified state of those configurations with their live state on the cluster. Configurations that deviate from their specified state in the Git repository are classified as OutOfSync. ArgoCD reports these differences and allows administrators to automatically or manually resync configurations to the defined state. - -ArgoCD enables you to deliver global custom resources, like the resources that are used to configure {product-title} clusters. - -[id="argocd-support"] -== Statement of support - -Red Hat does not provide support for this tool. To obtain support for ArgoCD, see link:https://argoproj.github.io/argo-cd/SUPPORT/[Support] in the ArgoCD documentation. - -[id="argocd-documentation"] -== ArgoCD documentation - -For more information about using ArgoCD, see the link:https://argoproj.github.io/argo-cd/[ArgoCD documentation]. diff --git a/backup_and_restore/application_backup_and_restore/installing/data-mover-intro.adoc b/backup_and_restore/application_backup_and_restore/installing/data-mover-intro.adoc deleted file mode 100644 index 38a88f9dc4b7..000000000000 --- a/backup_and_restore/application_backup_and_restore/installing/data-mover-intro.adoc +++ /dev/null @@ -1,29 +0,0 @@ -:_mod-docs-content-type: CONCEPT -[id="oadp-data-mover-intro"] -= OADP Data Mover Introduction -include::_attributes/common-attributes.adoc[] -:context: data-mover - -toc::[] - -OADP Data Mover allows you to restore stateful applications from the store if a failure, accidental deletion, or corruption of the cluster occurs. - -:FeatureName: The OADP 1.2 Data Mover -include::snippets/technology-preview.adoc[leveloffset=+1] - -* You can use OADP Data Mover to back up Container Storage Interface (CSI) volume snapshots to a remote object store. See xref:../../../backup_and_restore/application_backup_and_restore/installing/oadp-using-data-mover-for-csi-snapshots-doc.adoc#oadp-using-data-mover-for-csi-snapshots-doc[Using Data Mover for CSI snapshots]. - -* You can use OADP 1.2 Data Mover to back up and restore application data for clusters that use CephFS, CephRBD, or both. See xref:../../../backup_and_restore/application_backup_and_restore/installing/oadp-using-data-mover-for-csi-snapshots-doc.adoc#oadp-using-data-mover-for-csi-snapshots-doc[Using OADP 1.2 Data Mover with Ceph storage]. - -include::snippets/snip-post-mig-hook[] - -[id="oadp-data-mover-prerequisites"] -== OADP Data Mover prerequisites - -* You have a stateful application running in a separate namespace. - -* You have installed the OADP Operator by using Operator Lifecycle Manager (OLM). - -* You have created an appropriate `VolumeSnapshotClass` and `StorageClass`. - -* You have installed the VolSync operator using OLM. 
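-
-The following is a minimal sketch of a `VolumeSnapshotClass` CR that satisfies these prerequisites. The class name and the CSI driver name are illustrative assumptions; substitute the driver for your storage backend:
-
-[source,yaml]
-----
-apiVersion: snapshot.storage.k8s.io/v1
-kind: VolumeSnapshotClass
-metadata:
-  name: example-snapclass # assumed name
-  labels:
-    velero.io/csi-volumesnapshot-class: "true" # lets Velero select this class
-driver: example.csi.vendor.io # assumed CSI driver for your storage backend
-deletionPolicy: Retain # Data Mover requires Retain
-----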
diff --git a/backup_and_restore/application_backup_and_restore/installing/oadp-using-data-mover-for-csi-snapshots-doc.adoc b/backup_and_restore/application_backup_and_restore/installing/oadp-using-data-mover-for-csi-snapshots-doc.adoc
deleted file mode 100644
index 86966993a943..000000000000
--- a/backup_and_restore/application_backup_and_restore/installing/oadp-using-data-mover-for-csi-snapshots-doc.adoc
+++ /dev/null
@@ -1,276 +0,0 @@
-// Module included in the following assemblies:
-//
-// * backup_and_restore/application_backup_and_restore/backing_up_and_restoring/backing-up-applications.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="oadp-using-data-mover-for-csi-snapshots-doc"]
-= Using Data Mover for CSI snapshots
-include::_attributes/common-attributes.adoc[]
-:context: backing-up-applications
-
-toc::[]
-
-:FeatureName: Data Mover for CSI snapshots
-
-The OADP Data Mover enables customers to back up Container Storage Interface (CSI) volume snapshots to a remote object store. When Data Mover is enabled, you can restore stateful applications by using CSI volume snapshots pulled from the object store if a failure, accidental deletion, or corruption of the cluster occurs.
-
-The Data Mover solution uses the Restic option of VolSync.
-
-Data Mover supports backup and restore of CSI volume snapshots only.
-
-In OADP 1.2, Data Mover `VolumeSnapshotBackup` (VSB) and `VolumeSnapshotRestore` (VSR) custom resources are queued by the VolumeSnapshotMover (VSM). You can improve the performance of the VSM by specifying the number of VSBs and VSRs that can be `InProgress` simultaneously. After all async plugin operations are complete, the backup is marked as complete.
-
-
-:FeatureName: The OADP 1.2 Data Mover
-include::snippets/technology-preview.adoc[leveloffset=+1]
-
-[NOTE]
-====
-Red Hat recommends that customers who use OADP 1.2 Data Mover to back up and restore ODF CephFS volumes upgrade to or install {product-title} version 4.12 or later for improved performance. OADP Data Mover can leverage CephFS shallow volumes in {product-title} version 4.12 or later, which, based on our testing, can improve backup times.
-
-* https://issues.redhat.com/browse/RHSTOR-4287[CephFS ROX details]
-//* https://github.com/ceph/ceph-csi/blob/devel/docs/cephfs-snapshot-backed-volumes.md[Provisioning and mounting CephFS snapshot-backed volumes]
-
-
-//For more information about OADP 1.2 with CephFS [name of topic], see ___.
-
-====
-
-.Prerequisites
-
-* You have verified that the `StorageClass` and `VolumeSnapshotClass` custom resources (CRs) support CSI.
-
-* You have verified that only one `VolumeSnapshotClass` CR has the annotation `snapshot.storage.kubernetes.io/is-default-class: "true"`.
-+
-[NOTE]
-====
-In {product-title} version 4.12 or later, verify that this is the only default `VolumeSnapshotClass`.
-====
-
-* You have verified that the `deletionPolicy` of the `VolumeSnapshotClass` CR is set to `Retain`.
-
-* You have verified that only one `StorageClass` CR has the annotation `storageclass.kubernetes.io/is-default-class: "true"`.
-
-* You have included the label `{velero-domain}/csi-volumesnapshot-class: "true"` in your `VolumeSnapshotClass` CR.
-
-* You have annotated the OADP namespace with `volsync.backube/privileged-movers="true"`, for example by running the `oc annotate --overwrite namespace/openshift-adp volsync.backube/privileged-movers="true"` command.
-+
-[NOTE]
-====
-In OADP 1.2, the `privileged-movers` setting is not required in most scenarios. The restoring container permissions should be adequate for the VolSync copy.
-In some user scenarios, there might be permission errors that setting `privileged-movers` to `true` resolves.
-====
-
-* You have installed the VolSync Operator by using the Operator Lifecycle Manager (OLM).
-+
-[NOTE]
-====
-The VolSync Operator is required for using OADP Data Mover.
-====
-
-* You have installed the OADP Operator by using OLM.
-+
---
-include::snippets/xfs-filesystem-snippet.adoc[]
---
-
-.Procedure
-
-. Configure a Restic secret by creating a `.yaml` file as follows:
-+
-[source,yaml]
-----
-apiVersion: v1
-kind: Secret
-metadata:
-  name: <secret_name>
-  namespace: openshift-adp
-type: Opaque
-stringData:
-  RESTIC_PASSWORD: <restic_password>
-----
-+
-[NOTE]
-====
-By default, the Operator looks for a secret named `dm-credential`. If you are using a different name, you need to specify the name through a Data Protection Application (DPA) CR using `dpa.spec.features.dataMover.credentialName`.
-====
-
-. Create a DPA CR similar to the following example. The default plugins include CSI.
-+
-.Example Data Protection Application (DPA) CR
-[source,yaml]
-----
-apiVersion: oadp.openshift.io/v1alpha1
-kind: DataProtectionApplication
-metadata:
-  name: velero-sample
-  namespace: openshift-adp
-spec:
-  backupLocations:
-  - velero:
-      config:
-        profile: default
-        region: us-east-1
-      credential:
-        key: cloud
-        name: cloud-credentials
-      default: true
-      objectStorage:
-        bucket: <bucket_name>
-        prefix: <bucket_prefix>
-      provider: aws
-  configuration:
-    restic:
-      enable: <true_or_false>
-    velero:
-      itemOperationSyncFrequency: "10s"
-      defaultPlugins:
-      - openshift
-      - aws
-      - csi
-      - vsm
-  features:
-    dataMover:
-      credentialName: restic-secret
-      enable: true
-      maxConcurrentBackupVolumes: "3" <1>
-      maxConcurrentRestoreVolumes: "3" <2>
-      pruneInterval: "14" <3>
-      volumeOptions: <4>
-        sourceVolumeOptions:
-          accessMode: ReadOnlyMany
-          cacheAccessMode: ReadWriteOnce
-          cacheCapacity: 2Gi
-        destinationVolumeOptions:
-          storageClass: other-storageclass-name
-          cacheAccessMode: ReadWriteMany
-  snapshotLocations:
-  - velero:
-      config:
-        profile: default
-        region: us-west-2
-      provider: aws
-----
-<1> Optional: Specify the upper limit of the number of snapshots allowed to be queued for backup. The default value is `10`.
-<2> Optional: Specify the upper limit of the number of snapshots allowed to be queued for restore. The default value is `10`.
-<3> Optional: Specify the number of days between running Restic pruning on the repository. The prune operation repacks the data to free space, but it can also generate significant I/O traffic as a part of the process. Setting this option allows a trade-off between storage consumption, from no longer referenced data, and access costs.
-<4> Optional: Specify VolSync volume options for backup and restore.
-
-+
-The OADP Operator installs two custom resource definitions (CRDs), `VolumeSnapshotBackup` and `VolumeSnapshotRestore`.
-+
-.Example `VolumeSnapshotBackup` CRD
-[source,yaml]
-----
-apiVersion: datamover.oadp.openshift.io/v1alpha1
-kind: VolumeSnapshotBackup
-metadata:
-  name: <vsb_name>
-  namespace: <namespace> <1>
-spec:
-  volumeSnapshotContent:
-    name: <snapcontent_name>
-  protectedNamespace: <protected_ns> <2>
-  resticSecretRef:
-    name: <restic_secret>
-----
-<1> Specify the namespace where the volume snapshot exists.
-<2> Specify the namespace where the OADP Operator is installed. The default is `openshift-adp`.
-+
-.Example `VolumeSnapshotRestore` CRD
-[source,yaml]
-----
-apiVersion: datamover.oadp.openshift.io/v1alpha1
-kind: VolumeSnapshotRestore
-metadata:
-  name: <vsr_name>
-  namespace: <namespace> <1>
-spec:
-  protectedNamespace: <protected_ns> <2>
-  resticSecretRef:
-    name: <restic_secret>
-  volumeSnapshotMoverBackupRef:
-    sourcePVCData:
-      name: <source_pvc_name>
-      size: <source_pvc_size>
-    resticrepository: <your_restic_repo>
-    volumeSnapshotClassName: <vsclass_name>
-----
-<1> Specify the namespace where the volume snapshot exists.
-<2> Specify the namespace where the OADP Operator is installed. The default is `openshift-adp`.
-
-. You can back up a volume snapshot by performing the following steps:
-
-.. Create a backup CR:
-+
-[source,yaml]
-----
-apiVersion: velero.io/v1
-kind: Backup
-metadata:
-  name: <backup_name>
-  namespace: <protected_ns> <1>
-spec:
-  includedNamespaces:
-  - <app_ns> <2>
-  storageLocation: velero-sample-1
-----
-<1> Specify the namespace where the Operator is installed. The default namespace is `openshift-adp`.
-<2> Specify the application namespace or namespaces to be backed up.
-
-.. Wait up to 10 minutes and check whether the `VolumeSnapshotBackup` CR status is `Completed` by entering the following commands:
-+
-[source,terminal]
-----
-$ oc get vsb -n <app_ns>
-----
-+
-[source,terminal]
-----
-$ oc get vsb <vsb_name> -n <app_ns> -o jsonpath="{.status.phase}"
-----
-+
-A snapshot is created in the object store that was configured in the DPA.
-+
-[NOTE]
-====
-If the status of the `VolumeSnapshotBackup` CR becomes `Failed`, refer to the Velero logs for troubleshooting.
-====
-
-. You can restore a volume snapshot by performing the following steps:
-
-.. Delete the application namespace and the `VolumeSnapshotContent` that was created by the Velero CSI plugin.
-
-.. Create a `Restore` CR and set `restorePVs` to `true`.
-+
-.Example `Restore` CR
-[source,yaml]
-----
-apiVersion: velero.io/v1
-kind: Restore
-metadata:
-  name: <restore_name>
-  namespace: <protected_ns>
-spec:
-  backupName: <previous_backup_name>
-  restorePVs: true
-----
-
-.. Wait up to 10 minutes and check whether the `VolumeSnapshotRestore` CR status is `Completed` by entering the following command:
-+
-[source,terminal]
-----
-$ oc get vsr -n <app_ns>
-----
-+
-[source,terminal]
-----
-$ oc get vsr <vsr_name> -n <app_ns> -o jsonpath="{.status.phase}"
-----
-
-.. Check whether your application data and resources have been restored.
-+
-[NOTE]
-====
-If the status of the `VolumeSnapshotRestore` CR becomes `Failed`, refer to the Velero logs for troubleshooting.
-====
diff --git a/installing/installing_aws/ipi/installing-aws-network-customizations.adoc b/installing/installing_aws/ipi/installing-aws-network-customizations.adoc
deleted file mode 100644
index b90148a10a28..000000000000
--- a/installing/installing_aws/ipi/installing-aws-network-customizations.adoc
+++ /dev/null
@@ -1,133 +0,0 @@
-:_mod-docs-content-type: ASSEMBLY
-[id="installing-aws-network-customizations"]
-= Installing a cluster on AWS with network customizations
-include::_attributes/common-attributes.adoc[]
-:context: installing-aws-network-customizations
-
-toc::[]
-
-In {product-title} version {product-version}, you can install a cluster on
-Amazon Web Services (AWS) with customized network configuration options. By
-customizing your network configuration, your cluster can coexist with existing
-IP address allocations in your environment and integrate with existing MTU and
-VXLAN configurations.
-
-You must set most of the network configuration parameters during installation,
-and you can modify only `kubeProxy` configuration parameters in a running
-cluster.
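-
-As a sketch of what such a customization looks like, the `networking` stanza of the `install-config.yaml` file might resemble the following. The CIDR values shown are the documented defaults, not recommendations for your environment:
-
-[source,yaml]
-----
-networking:
-  networkType: OVNKubernetes
-  clusterNetwork: # pod IP range
-  - cidr: 10.128.0.0/14
-    hostPrefix: 23
-  serviceNetwork: # service IP range
-  - 172.30.0.0/16
-  machineNetwork: # must match your existing VPC addressing
-  - cidr: 10.0.0.0/16
-----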
- -== Prerequisites - -* You reviewed details about the xref:../../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes. -* You read the documentation on xref:../../../installing/overview/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users]. -* You xref:../../../installing/installing_aws/installing-aws-account.adoc#installing-aws-account[configured an AWS account] to host the cluster. -+ -[IMPORTANT] -==== -If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see link:https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html[Managing Access Keys for IAM Users] in the AWS documentation. You can supply the keys when you run the installation program. -==== -* If you use a firewall, you xref:../../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster requires access to. - -include::modules/nw-network-config.adoc[leveloffset=+1] - -include::modules/installation-initializing.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources -* xref:../../../installing/installing_aws/installation-config-parameters-aws.adoc#installation-config-parameters-aws[Installation configuration parameters for AWS] - -include::modules/installation-minimum-resource-requirements.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../scalability_and_performance/optimization/optimizing-storage.adoc#optimizing-storage[Optimizing storage] - -include::modules/installation-aws-tested-machine-types.adoc[leveloffset=+2] -include::modules/installation-aws-arm-tested-machine-types.adoc[leveloffset=+2] - -include::modules/installation-aws-config-yaml-customizations.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../installing/installing_aws/installation-config-parameters-aws.adoc#installation-config-parameters-aws[Installation configuration parameters for AWS] - -include::modules/installation-configure-proxy.adoc[leveloffset=+2] - -[id="installing-aws-manual-modes_{context}"] -== Alternatives to storing administrator-level secrets in the kube-system project - -By default, administrator secrets are stored in the `kube-system` project. If you configured the `credentialsMode` parameter in the `install-config.yaml` file to `Manual`, you must use one of the following alternatives: - -* To manage long-term cloud credentials manually, follow the procedure in xref:../../../installing/installing_aws/ipi/installing-aws-network-customizations.adoc#manually-create-iam_installing-aws-network-customizations[Manually creating long-term credentials]. - -* To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in xref:../../../installing/installing_aws/ipi/installing-aws-network-customizations.adoc#installing-aws-with-short-term-creds_installing-aws-network-customizations[Configuring an AWS cluster to use short-term credentials]. 
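-
-Both alternatives assume that manual credentials mode was selected at installation time. A minimal sketch of the relevant `install-config.yaml` line follows; the surrounding fields are elided and the base domain is an illustrative value:
-
-[source,yaml]
-----
-apiVersion: v1
-baseDomain: example.com # illustrative value
-credentialsMode: Manual # triggers the alternatives described above
-# ...
-----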
- -//Manually creating long-term credentials -include::modules/manually-create-identity-access-management.adoc[leveloffset=+2] - -//Supertask: Configuring an AWS cluster to use short-term credentials -[id="installing-aws-with-short-term-creds_{context}"] -=== Configuring an AWS cluster to use short-term credentials - -To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster. - -//Task part 1: Configuring the Cloud Credential Operator utility -include::modules/cco-ccoctl-configuring.adoc[leveloffset=+3] - -//Task part 2: Creating the required AWS resources -[id="sts-mode-create-aws-resources-ccoctl_{context}"] -==== Creating AWS resources with the Cloud Credential Operator utility - -You have the following options when creating AWS resources: - -* You can use the `ccoctl aws create-all` command to create the AWS resources automatically. This is the quickest way to create the resources. See xref:../../../installing/installing_aws/ipi/installing-aws-network-customizations.adoc#cco-ccoctl-creating-at-once_installing-aws-network-customizations[Creating AWS resources with a single command]. - -* If you need to review the JSON files that the `ccoctl` tool creates before modifying AWS resources, or if the process the `ccoctl` tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See xref:../../../installing/installing_aws/ipi/installing-aws-network-customizations.adoc#cco-ccoctl-creating-individually_installing-aws-network-customizations[Creating AWS resources individually]. - -//Task part 2a: Creating the required AWS resources all at once -include::modules/cco-ccoctl-creating-at-once.adoc[leveloffset=+4] - -//Task part 2b: Creating the required AWS resources individually -include::modules/cco-ccoctl-creating-individually.adoc[leveloffset=+4] - -//Task part 3: Incorporating the Cloud Credential Operator utility manifests -include::modules/cco-ccoctl-install-creating-manifests.adoc[leveloffset=+3] - -// Network Operator specific configuration -include::modules/nw-operator-cr.adoc[leveloffset=+1] -include::modules/nw-modifying-operator-install-config.adoc[leveloffset=+1] - - -[NOTE] -==== -For more information on using a Network Load Balancer (NLB) on AWS, see xref:../../../networking/ingress_load_balancing/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-aws.adoc#nw-configuring-ingress-cluster-traffic-aws-network-load-balancer_configuring-ingress-cluster-traffic-aws[Configuring Ingress cluster traffic on AWS using a Network Load Balancer]. -==== - -include::modules/nw-aws-nlb-new-cluster.adoc[leveloffset=+1] - -include::modules/configuring-hybrid-ovnkubernetes.adoc[leveloffset=+1] - -[NOTE] -==== -For more information about using Linux and Windows nodes in the same cluster, see xref:../../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads]. 
-==== - -include::modules/installation-launching-installer.adoc[leveloffset=+1] - -include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1] - -include::modules/logging-in-by-using-the-web-console.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* See xref:../../../web_console/web-console.adoc#web-console[Accessing the web console] for more details about accessing and understanding the {product-title} web console. - -== Next steps - -* xref:../../../installing/validation_and_troubleshooting/validating-an-installation.adoc#validating-an-installation[Validating an installation]. -* xref:../../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster]. -* If necessary, you can xref:../../../support/remote_health_monitoring/remote-health-reporting.adoc#remote-health-reporting[Remote health reporting]. -* If necessary, you can xref:../../../post_installation_configuration/changing-cloud-credentials-configuration.adoc#manually-removing-cloud-creds_changing-cloud-credentials-configuration[remove cloud provider credentials]. diff --git a/installing/installing_azure/ipi/installing-azure-network-customizations.adoc b/installing/installing_azure/ipi/installing-azure-network-customizations.adoc deleted file mode 100644 index eb3c27fa2c55..000000000000 --- a/installing/installing_azure/ipi/installing-azure-network-customizations.adoc +++ /dev/null @@ -1,101 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="installing-azure-network-customizations"] -= Installing a cluster on Azure with network customizations -include::_attributes/common-attributes.adoc[] -:context: installing-azure-network-customizations - -toc::[] - -In {product-title} version {product-version}, you can install a cluster with a -customized network configuration on infrastructure that the installation program -provisions on Microsoft Azure. By customizing your network configuration, your -cluster can coexist with existing IP address allocations in your environment and -integrate with existing MTU and VXLAN configurations. - -You must set most of the network configuration parameters during installation, -and you can modify only `kubeProxy` configuration parameters in a running -cluster. 
- -include::modules/installation-initializing.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources -* xref:../../../installing/installing_azure/installation-config-parameters-azure.adoc#installation-config-parameters-azure[Installation configuration parameters for Azure] - -include::modules/installation-minimum-resource-requirements.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../../scalability_and_performance/optimization/optimizing-storage.adoc#optimizing-storage[Optimizing storage] - -include::modules/installation-azure-tested-machine-types.adoc[leveloffset=+2] - -include::modules/installation-azure-arm-tested-machine-types.adoc[leveloffset=+2] - -include::modules/installation-azure-trusted-launch.adoc[leveloffset=+2] -include::modules/installation-azure-confidential-vms.adoc[leveloffset=+2] - -include::modules/installation-azure-dedicated-disks.adoc[leveloffset=+2] - -include::modules/installation-azure-config-yaml.adoc[leveloffset=+2] - -include::modules/installation-configure-proxy.adoc[leveloffset=+2] - -// Network Operator specific configuration -include::modules/nw-network-config.adoc[leveloffset=+1] -include::modules/nw-modifying-operator-install-config.adoc[leveloffset=+1] -include::modules/nw-operator-cr.adoc[leveloffset=+1] -include::modules/configuring-hybrid-ovnkubernetes.adoc[leveloffset=+1] - -[NOTE] -==== -For more information about using Linux and Windows nodes in the same cluster, see xref:../../../windows_containers/understanding-windows-container-workloads.adoc#understanding-windows-container-workloads[Understanding Windows container workloads]. -==== - -[role="_additional-resources"] -.Additional resources - -* For more details about Accelerated Networking, see xref:../../../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-azure-accelerated-networking_creating-machineset-azure[Accelerated Networking for Microsoft Azure VMs]. - -[id="installing-azure-manual-modes_{context}"] -== Alternatives to storing administrator-level secrets in the kube-system project - -By default, administrator secrets are stored in the `kube-system` project. If you configured the `credentialsMode` parameter in the `install-config.yaml` file to `Manual`, you must use one of the following alternatives: - -* To manage long-term cloud credentials manually, follow the procedure in xref:../../../installing/installing_azure/ipi/installing-azure-network-customizations.adoc#manually-create-iam_installing-azure-network-customizations[Manually creating long-term credentials]. - -* To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in xref:../../../installing/installing_azure/ipi/installing-azure-network-customizations.adoc#installing-azure-with-short-term-creds_installing-azure-network-customizations[Configuring an Azure cluster to use short-term credentials]. - -//Manually creating long-term credentials -include::modules/manually-create-identity-access-management.adoc[leveloffset=+2] - -//Supertask: Configuring an Azure cluster to use short-term credentials -[id="installing-azure-with-short-term-creds_{context}"] -=== Configuring an Azure cluster to use short-term credentials - -To install a cluster that uses {entra-first}, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster. 
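-
-As context for the tasks that follow, the `ccoctl` workflow typically begins by extracting the `CredentialsRequest` objects from the release image. The command below is a sketch; the flags and placeholders shown are assumptions to verify against your `oc` client version:
-
-[source,terminal]
-----
-$ oc adm release extract --credentials-requests --cloud=azure --to=<credrequests_dir> <release_image>
-----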
-
-//Task part 1: Configuring the Cloud Credential Operator utility
-include::modules/cco-ccoctl-configuring.adoc[leveloffset=+3]
-
-//Task part 2: Creating the required Azure resources
-include::modules/cco-ccoctl-creating-at-once.adoc[leveloffset=+3]
-
-// Additional steps for the Cloud Credential Operator utility (`ccoctl`)
-include::modules/cco-ccoctl-install-creating-manifests.adoc[leveloffset=+3]
-
-include::modules/installation-launching-installer.adoc[leveloffset=+1]
-
-include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]
-
-[role="_additional-resources"]
-.Additional resources
-
-* See xref:../../../web_console/web-console.adoc#web-console[Accessing the web console] for more details about accessing and understanding the {product-title} web console.
-
-== Next steps
-
-* xref:../../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster].
-* If necessary, you can
-xref:../../../support/remote_health_monitoring/remote-health-reporting.adoc#remote-health-reporting[opt out of remote health reporting].
diff --git a/installing/installing_gcp/installing-gcp-network-customizations.adoc b/installing/installing_gcp/installing-gcp-network-customizations.adoc
deleted file mode 100644
index 8692c9199638..000000000000
--- a/installing/installing_gcp/installing-gcp-network-customizations.adoc
+++ /dev/null
@@ -1,137 +0,0 @@
-:_mod-docs-content-type: ASSEMBLY
-include::_attributes/common-attributes.adoc[]
-[id="installing-gcp-network-customizations"]
-= Installing a cluster on {gcp-short} with network customizations
-:context: installing-gcp-network-customizations
-
-toc::[]
-
-In {product-title} version {product-version}, you can install a cluster with a
-customized network configuration on infrastructure that the installation program
-provisions on {gcp-first}. By customizing your network
-configuration, your cluster can coexist with existing IP address allocations in
-your environment and integrate with existing MTU and VXLAN configurations. To
-customize the installation, you modify parameters in the `install-config.yaml`
-file before you install the cluster.
-
-You must set most of the network configuration parameters during installation,
-and you can modify only `kubeProxy` configuration parameters in a running
-cluster.
-
-== Prerequisites
-
-* You reviewed details about the xref:../../architecture/architecture-installation.adoc#architecture-installation[{product-title} installation and update] processes.
-* You read the documentation on xref:../../installing/overview/installing-preparing.adoc#installing-preparing[selecting a cluster installation method and preparing it for users].
-* You xref:../../installing/installing_gcp/installing-gcp-account.adoc#installing-gcp-account[configured a {gcp-short} project] to host the cluster.
-* If you use a firewall, you xref:../../installing/install_config/configuring-firewall.adoc#configuring-firewall[configured it to allow the sites] that your cluster requires access to.
- -include::modules/cluster-entitlements.adoc[leveloffset=+1] - -include::modules/ssh-agent-using.adoc[leveloffset=+1] - -include::modules/installation-obtaining-installer.adoc[leveloffset=+1] - -include::modules/installation-initializing.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources -* xref:../../installing/installing_gcp/installation-config-parameters-gcp.adoc#installation-config-parameters-gcp[Installation configuration parameters for {gcp-short}] - -include::modules/installation-minimum-resource-requirements.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../scalability_and_performance/optimization/optimizing-storage.adoc#optimizing-storage[Optimizing storage] - -include::modules/installation-gcp-tested-machine-types.adoc[leveloffset=+2] - -include::modules/installation-gcp-tested-machine-types-arm.adoc[leveloffset=+2] - -include::modules/installation-using-gcp-custom-machine-types.adoc[leveloffset=+2] - -include::modules/installation-gcp-enabling-shielded-vms.adoc[leveloffset=+2] - -include::modules/installation-gcp-enabling-confidential-vms.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources -* xref:../../installing/installing_gcp/installation-config-parameters-gcp.adoc#installation-configuration-parameters-additional-gcp_installation-config-parameters-gcp[Additional {gcp-first} configuration parameters] - -include::modules/installation-gcp-managing-dns-solution.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources -* xref:../../installing/installing_gcp/installation-config-parameters-gcp.adoc#installation-config-parameters-gcp[Installation configuration parameters for {gcp-first}] - -include::modules/installation-gcp-config-yaml.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources - -* xref:../../machine_management/creating_machinesets/creating-machineset-gcp.adoc#machineset-enabling-customer-managed-encryption_creating-machineset-gcp[Enabling customer-managed encryption keys for a compute machine set] - -include::modules/installation-configure-proxy.adoc[leveloffset=+2] - -//Installing the OpenShift CLI by downloading the binary: Moved up to precede `ccoctl` steps, which require the use of `oc` -include::modules/cli-installing-cli.adoc[leveloffset=+1] - -[id="installing-gcp-manual-modes_{context}"] -== Alternatives to storing administrator-level secrets in the kube-system project - -By default, administrator secrets are stored in the `kube-system` project. If you configured the `credentialsMode` parameter in the `install-config.yaml` file to `Manual`, you must use one of the following alternatives: - -* To manage long-term cloud credentials manually, follow the procedure in xref:../../installing/installing_gcp/installing-gcp-network-customizations.adoc#manually-create-iam_installing-gcp-network-customizations[Manually creating long-term credentials]. - -* To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in xref:../../installing/installing_gcp/installing-gcp-network-customizations.adoc#installing-gcp-with-short-term-creds_installing-gcp-network-customizations[Configuring a {gcp-short} cluster to use short-term credentials]. 
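-
-For orientation, the short-term credentials path described below ultimately runs the `ccoctl` utility against the extracted credentials requests. A hedged sketch with placeholder values follows:
-
-[source,terminal]
-----
-$ ccoctl gcp create-all \
-  --name=<name> \
-  --region=<gcp_region> \
-  --project=<gcp_project_id> \
-  --credentials-requests-dir=<path_to_credentials_requests_directory>
-----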
-
-//Manually creating long-term credentials
-include::modules/manually-create-identity-access-management.adoc[leveloffset=+2]
-
-//Supertask: Configuring a GCP cluster to use short-term credentials
-[id="installing-gcp-with-short-term-creds_{context}"]
-=== Configuring a {gcp-short} cluster to use short-term credentials
-
-To install a cluster that is configured to use {gcp-short} Workload Identity, you must configure the CCO utility and create the required {gcp-short} resources for your cluster.
-
-//Task part 1: Configuring the Cloud Credential Operator utility
-include::modules/cco-ccoctl-configuring.adoc[leveloffset=+3]
-
-//Task part 2: Creating the required GCP resources
-include::modules/cco-ccoctl-creating-at-once.adoc[leveloffset=+3]
-
-//Task part 3: Incorporating the Cloud Credential Operator utility manifests
-include::modules/cco-ccoctl-install-creating-manifests.adoc[leveloffset=+3]
-
-// Network Operator specific configuration
-include::modules/nw-network-config.adoc[leveloffset=+1]
-include::modules/nw-modifying-operator-install-config.adoc[leveloffset=+1]
-include::modules/nw-operator-cr.adoc[leveloffset=+1]
-
-include::modules/installation-launching-installer.adoc[leveloffset=+1]
-
-include::modules/installation-gcp-provisioning-dns-records.adoc[leveloffset=+1]
-
-[role="_additional-resources"]
-.Additional resources
-* xref:../../installing/installing_gcp/installation-config-parameters-gcp.adoc#installation-configuration-parameters-additional-gcp_installation-config-parameters-gcp[Additional {gcp-first} configuration parameters]
-
-include::modules/cli-logging-in-kubeadmin.adoc[leveloffset=+1]
-
-[role="_additional-resources"]
-.Additional resources
-
-* See xref:../../web_console/web-console.adoc#web-console[Accessing the web console] for more details about accessing and understanding the {product-title} web console.
-
-include::modules/cluster-telemetry.adoc[leveloffset=+1]
-
-[role="_additional-resources"]
-.Additional resources
-
-* See xref:../../support/remote_health_monitoring/about-remote-health-monitoring.adoc#about-remote-health-monitoring[About remote health monitoring] for more information about the Telemetry service.
-
-== Next steps
-
-* xref:../../post_installation_configuration/cluster-tasks.adoc#available_cluster_customizations[Customize your cluster].
-* If necessary, you can
-xref:../../support/remote_health_monitoring/remote-health-reporting.adoc#remote-health-reporting[opt out of remote health reporting].
diff --git a/machine_management/adding-rhel-compute.adoc b/machine_management/adding-rhel-compute.adoc
deleted file mode 100644
index 04bf6523119e..000000000000
--- a/machine_management/adding-rhel-compute.adoc
+++ /dev/null
@@ -1,52 +0,0 @@
-:_mod-docs-content-type: ASSEMBLY
-[id="adding-rhel-compute"]
-= Adding RHEL compute machines to an {product-title} cluster
-include::_attributes/common-attributes.adoc[]
-:context: adding-rhel-compute
-
-toc::[]
-
-In {product-title}, you can add {op-system-base-full} compute machines to a user-provisioned infrastructure cluster or an installer-provisioned infrastructure cluster on the `x86_64` architecture. You can use {op-system-base} as the operating system only on compute machines.
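-
-For orientation, the flow in the modules below culminates in running an Ansible playbook against your inventory. A sketch, with an assumed inventory path:
-
-[source,terminal]
-----
-$ cd /usr/share/ansible/openshift-ansible
-$ ansible-playbook -i /<path>/inventory/hosts playbooks/scaleup.yml
-----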
- -include::modules/rhel-compute-overview.adoc[leveloffset=+1] - -include::modules/rhel-compute-requirements.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-deleting_nodes-nodes-working[Deleting nodes] -* xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-azure-accelerated-networking_creating-machineset-azure[Accelerated Networking for Microsoft Azure VMs] - -include::modules/csr-management.adoc[leveloffset=+2] - -[id="adding-rhel-compute-preparing-image-cloud"] -== Preparing an image for your cloud - -Amazon Machine Images (AMI) are required because various image formats cannot be used directly by AWS. You may use the AMIs that Red Hat has provided, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned. You will need a valid AMI ID so that the correct {op-system-base} version needed for the compute machines is selected. - -include::modules/rhel-images-aws.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources -* You may also manually link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter5-section_2[import {op-system-base} images to AWS]. - -include::modules/rhel-preparing-playbook-machine.adoc[leveloffset=+1] - -include::modules/rhel-preparing-node.adoc[leveloffset=+1] - -include::modules/rhel-attaching-instance-aws.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources -* See xref:../installing/installing_aws/installing-aws-account.adoc#installation-aws-permissions-iam-roles_installing-aws-account[Required AWS permissions for IAM roles]. - -include::modules/rhel-worker-tag.adoc[leveloffset=+1] - -include::modules/rhel-adding-node.adoc[leveloffset=+1] - -include::modules/installation-approve-csrs.adoc[leveloffset=+1] - -include::modules/rhel-ansible-parameters.adoc[leveloffset=+1] - -include::modules/rhel-removing-rhcos.adoc[leveloffset=+2] diff --git a/machine_management/more-rhel-compute.adoc b/machine_management/more-rhel-compute.adoc deleted file mode 100644 index be01a0a78066..000000000000 --- a/machine_management/more-rhel-compute.adoc +++ /dev/null @@ -1,48 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="more-rhel-compute"] -= Adding more RHEL compute machines to an {product-title} cluster -include::_attributes/common-attributes.adoc[] -:context: more-rhel-compute - -toc::[] - -If your {product-title} cluster already includes Red Hat Enterprise Linux (RHEL) compute machines, which are also known as worker machines, you can add more RHEL compute machines to it. - -include::modules/rhel-compute-overview.adoc[leveloffset=+1] - -include::modules/rhel-compute-requirements.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources - -* xref:../nodes/nodes/nodes-nodes-working.adoc#nodes-nodes-working-deleting_nodes-nodes-working[Deleting nodes] -* xref:../machine_management/creating_machinesets/creating-machineset-azure.adoc#machineset-azure-accelerated-networking_creating-machineset-azure[Accelerated Networking for Microsoft Azure VMs] - -include::modules/csr-management.adoc[leveloffset=+2] - -[id="more-rhel-compute-preparing-image-cloud"] -== Preparing an image for your cloud - -Amazon Machine Images (AMI) are required since various image formats cannot be used directly by AWS. 
You may use the AMIs that Red Hat has provided, or you can manually import your own images. The AMI must exist before the EC2 instance can be provisioned. You must list the AMI IDs so that the correct {op-system-base} version needed for the compute machines is selected. - -include::modules/rhel-images-aws.adoc[leveloffset=+2] - -[role="_additional-resources"] -.Additional resources -* You may also manually link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/image_builder_guide/sect-documentation-image_builder-chapter5-section_2[import {op-system-base} images to AWS]. - -include::modules/rhel-preparing-node.adoc[leveloffset=+1] - -include::modules/rhel-attaching-instance-aws.adoc[leveloffset=+1] - -[role="_additional-resources"] -.Additional resources -* See xref:../installing/installing_aws/installing-aws-account.adoc#installation-aws-permissions-iam-roles_installing-aws-account[Required AWS permissions for IAM roles]. - -include::modules/rhel-worker-tag.adoc[leveloffset=+1] - -include::modules/rhel-adding-more-nodes.adoc[leveloffset=+1] - -include::modules/installation-approve-csrs.adoc[leveloffset=+1] - -include::modules/rhel-ansible-parameters.adoc[leveloffset=+1] diff --git a/networking/networking/networking_operators/ingress-operator.adoc b/networking/networking/networking_operators/ingress-operator.adoc deleted file mode 100644 index 71b33a851cd1..000000000000 --- a/networking/networking/networking_operators/ingress-operator.adoc +++ /dev/null @@ -1,2 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY - diff --git a/security/certificate_types_descriptions/certificate-types-descriptions-index.adoc b/security/certificate_types_descriptions/certificate-types-descriptions-index.adoc deleted file mode 100644 index 7ecb92a38c21..000000000000 --- a/security/certificate_types_descriptions/certificate-types-descriptions-index.adoc +++ /dev/null @@ -1,13 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="ocp-certificates"] -= Certificate types and descriptions -include::_attributes/common-attributes.adoc[] -:context: ocp-certificates - -toc::[] - -== Certificate validation - -{product-title} monitors certificates for proper validity, for the cluster certificates it issues and manages. The {product-title} alerting framework has rules to help identify when a certificate issue is about to occur. These rules consist of the following checks: - -* API server client certificate expiration is less than five minutes. diff --git a/security/rh-required-whitelisted-IP-addresses-for-sre-access.adoc b/security/rh-required-whitelisted-IP-addresses-for-sre-access.adoc deleted file mode 100644 index 4c0ef5ccf387..000000000000 --- a/security/rh-required-whitelisted-IP-addresses-for-sre-access.adoc +++ /dev/null @@ -1,39 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="rh-required-whitelisted-IP-addresses-for-sre-access_{context}"] -include::_attributes/attributes-openshift-dedicated.adoc[] -include::_attributes/common-attributes.adoc[] -= Required allowlist IP addresses for SRE cluster access - -:context: rh-required-whitelisted-IP-addresses-for-sre-access - -toc::[] - -[id="required-whitelisted-overview_{context}"] -== Overview - -For Red Hat SREs to troubleshoot any issues within {product-title} clusters, they must have ingress access to the API server through allowlist IP addresses. 
- -[id="required-whitelisted-access_{context}"] -== Obtaining allowlisted IP addresses -{product-title} users can use an {cluster-manager} CLI command to obtain the most up-to-date allowlist IP addresses for the Red Hat machines that are necessary for SRE access to {product-title} clusters. - -[NOTE] -==== -These allowlist IP addresses are not permanent and are subject to change. You must continuously review the API output for the most current allowlist IP addresses. -==== -.Prerequisites -* You installed the link:https://console.redhat.com/openshift/downloads[OpenShift Cluster Manager API command-line interface (`ocm`)]. -* You are able to configure your firewall to include the allowlist IP addresses. - -.Procedure -. To get the current allowlist IP addresses needed for SRE access to your {product-title} cluster, run the following command: -+ -[source,terminal] ----- -$ ocm get /api/clusters_mgmt/v1/trusted_ip_addresses|jq -r '.items[].id' ----- -. Configure your firewall to grant access to the allowlist IP addresses. - - - - diff --git a/security/zero_trust_workload_identity_manager/zero-trust-manager-features.adoc b/security/zero_trust_workload_identity_manager/zero-trust-manager-features.adoc deleted file mode 100644 index d31e22985314..000000000000 --- a/security/zero_trust_workload_identity_manager/zero-trust-manager-features.adoc +++ /dev/null @@ -1,18 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="zero-trust-manager-features"] -= Zero Trust Workload Identity Manager components and features - -include::_attributes/common-attributes.adoc[] -:context: zero-trust-manager-features - -// SPIFFE SPIRE components -include::modules/zero-trust-manager-about-components.adoc[leveloffset=+1] - -//SPIRE features -include::modules/zero-trust-manager-about-features.adoc[leveloffset=+1] - - - - - - diff --git a/storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc b/storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc deleted file mode 100644 index b159db09bd1a..000000000000 --- a/storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc +++ /dev/null @@ -1,90 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="osd-persistent-storage-aws-efs-csi"] -= Setting up AWS Elastic File Service CSI Driver Operator -include::_attributes//attributes-openshift-dedicated.adoc[] -:context: osd-persistent-storage-aws-efs-csi -toc::[] - -// Content similar to persistent-storage-csi-aws-efs.adoc. Modules are reused. - -[IMPORTANT] -==== -This procedure is specific to the link:https://github.com/openshift/aws-efs-csi-driver-operator[AWS EFS CSI Driver Operator] (a Red Hat operator), which is only applicable for {product-title} 4.10 and later versions. -==== - -== Overview - -{product-title} is capable of provisioning persistent volumes (PVs) using the link:https://github.com/openshift/aws-efs-csi-driver[AWS EFS CSI driver]. - -Familiarity with link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/storage/index#persistent-storage-overview_understanding-persistent-storage[persistent storage] and link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/storage/index#persistent-storage-csi[configuring CSI volumes] is recommended when working with a CSI Operator and driver. - -After installing the AWS EFS CSI Driver Operator, {product-title} installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the `openshift-cluster-csi-drivers` namespace. 
This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets. - -* The _AWS EFS CSI Driver Operator_, after being installed, does not create a storage class by default to use to create persistent volume claims (PVCs). However, you can manually create the AWS EFS `StorageClass`. -The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on-demand. -This eliminates the need for cluster administrators to pre-provision storage. - -* The _AWS EFS CSI driver_ enables you to create and mount AWS EFS PVs. - -[NOTE] -==== -Amazon Elastic File Storage (Amazon EFS) only supports regional volumes, not zonal volumes. -==== - -include::modules/persistent-storage-csi-about.adoc[leveloffset=+1] - -:FeatureName: AWS EFS -include::modules/persistent-storage-efs-csi-driver-operator-setup.adoc[leveloffset=+1] - -include::modules/persistent-storage-csi-olm-operator-install.adoc[leveloffset=+2] -.Next steps -ifdef::openshift-rosa[] -* If you are using Amazon EFS with AWS Secure Token Service (STS), you must configure the {FeatureName} CSI driver with STS. For more information, see xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#efs-sts_osd-persistent-storage-aws-efs-csi[Configuring {FeatureName} CSI Driver with STS]. -endif::openshift-rosa[] -ifdef::openshift-dedicated[] -* xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#persistent-storage-csi-efs-driver-install_osd-persistent-storage-aws-efs-csi[Installing the {FeatureName} CSI Driver] -endif::openshift-dedicated[] - -// Separate procedure for OSD and ROSA. -ifdef::openshift-rosa[] -include::modules/osd-persistent-storage-csi-efs-sts.adoc[leveloffset=+2] -[role="_additional-resources"] -.Additional resources -* xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#persistent-storage-csi-olm-operator-install_osd-persistent-storage-aws-efs-csi[Installing the {FeatureName} CSI Driver Operator] -* xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#persistent-storage-csi-efs-driver-install_osd-persistent-storage-aws-efs-csi[Installing the {FeatureName} CSI Driver] -endif::openshift-rosa[] - -include::modules/persistent-storage-csi-efs-driver-install.adoc[leveloffset=+2] - -:StorageClass: AWS EFS -:Provisioner: efs.csi.aws.com -include::modules/storage-create-storage-class.adoc[leveloffset=+1] -include::modules/storage-create-storage-class-console.adoc[leveloffset=+2] -include::modules/storage-create-storage-class-cli.adoc[leveloffset=+2] - -include::modules/persistent-storage-csi-efs-create-volume.adoc[leveloffset=+1] - -include::modules/persistent-storage-csi-dynamic-provisioning-aws-efs.adoc[leveloffset=+1] -If you have problems setting up dynamic provisioning, see xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#efs-troubleshooting_osd-persistent-storage-aws-efs-csi[Amazon Elastic File Storage troubleshooting]. 
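-
-As a sketch of dynamic provisioning, a manually created AWS EFS storage class might look like the following; the file system ID is a placeholder for your own EFS file system:
-
-[source,yaml]
-----
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
-  name: efs-sc
-provisioner: efs.csi.aws.com
-parameters:
-  provisioningMode: efs-ap # one EFS access point per volume
-  fileSystemId: <efs_file_system_id>
-  directoryPerms: "700"
-----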
-[role="_additional-resources"] -.Additional resources -* xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#efs-create-volume_osd-persistent-storage-aws-efs-csi[Creating and configuring access to Amazon EFS volume(s)] -* xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#storage-create-storage-class_osd-persistent-storage-aws-efs-csi[Creating the {FeatureName} storage class] - -// Undefine {StorageClass} attribute, so that any mistakes are easily spotted -:!StorageClass: - -include::modules/persistent-storage-csi-efs-static-pv.adoc[leveloffset=+1] -If you have problems setting up static PVs, see xref:../../storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc#efs-troubleshooting_osd-persistent-storage-aws-efs-csi[Amazon Elastic File Storage troubleshooting]. - -include::modules/persistent-storage-csi-efs-security.adoc[leveloffset=+1] - -include::modules/persistent-storage-csi-efs-troubleshooting.adoc[leveloffset=+1] - -:FeatureName: AWS EFS -include::modules/persistent-storage-csi-olm-operator-uninstall.adoc[leveloffset=+1] - -[role="_additional-resources"] -== Additional resources - -* link:https://access.redhat.com/documentation/en-us/openshift_container_platform/latest/html-single/storage/index#persistent-storage-csi[Configuring CSI volumes] diff --git a/welcome/about-hcp.adoc b/welcome/about-hcp.adoc deleted file mode 100644 index 1207346b78f9..000000000000 --- a/welcome/about-hcp.adoc +++ /dev/null @@ -1,102 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="about-hcp"] -= Learn more about ROSA with HCP -include::_attributes/common-attributes.adoc[] -include::_attributes/attributes-openshift-dedicated.adoc[] -:context: about-hcp - -toc::[] - -{hcp-title-first} offers a reduced-cost solution to create a managed ROSA cluster with a focus on efficiency. You can quickly create a new cluster and deploy applications in minutes. - -== Key features of {hcp-title} - -* {hcp-title} requires a minimum of only two nodes, making it ideal for smaller projects while still being able to scale to support larger projects and enterprises. - -* The underlying control plane infrastructure is fully managed. Control plane components, such as the API server and etcd database, are hosted in a Red{nbsp}Hat-owned AWS account. - -* Provisioning time is approximately 10 minutes. - -* Customers can upgrade the control plane and machine pools separately, which means they do not have to shut down the entire cluster during upgrades. - -== Getting started with {hcp-title} - -Use the following sections to find content to help you learn about and use {hcp-title}. 
- -[id="architect"] -=== Architect - -[options="header",cols="3*"] -|=== -| Learn about {hcp-title} |Plan {hcp-title} deployment |Additional resources - -| xref:../architecture/index.adoc#architecture-overview[Architecture overview] -| xref:../backup_and_restore/application_backup_and_restore/oadp-intro.adoc#oadp-api[Back up and restore] -ifdef::openshift-rosa-hcp[] -| xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-life-cycle.adoc#rosa-hcp-life-cycle[{hcp-title} life cycle] -endif::openshift-rosa-hcp[] -| xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[{hcp-title} architecture] -ifdef::openshift-rosa-hcp[] -| xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-service-definition.adoc#rosa-hcp-service-definition[{hcp-title} service definition] -endif::openshift-rosa-hcp[] -| -| -| xref:../support/index.adoc#support-overview[Getting support] -|=== - - -[id="cluster-administrator"] -=== Cluster Administrator - -[options="header",cols="4*"] -|=== -|Learn about {hcp-title} |Deploy {hcp-title} |Manage {hcp-title} |Additional resources - -| xref:../architecture/rosa-architecture-models.adoc#rosa-architecture-models[{hcp-title} architecture] -| xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-hcp-sts-creating-a-cluster-quickly[Installing {hcp-title}] -// | xref :../observability/logging/cluster-logging.adoc#cluster-logging[Logging] -| xref:../support/index.adoc#support-overview[Getting Support] - -| link:https://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001Css5AAC[OpenShift Interactive Learning Portal] -| xref:../storage/index.adoc#storage-overview[Storage] -| xref:../observability/monitoring/about-ocp-monitoring/about-ocp-monitoring.adoc#about-ocp-monitoring[About {product-title} monitoring] -ifdef::openshift-rosa-hcp[] -| xref:../rosa_architecture/rosa_policy_service_definition/rosa-hcp-life-cycle.adoc#rosa-hcp-life-cycle[{hcp-title} life cycle] -endif::openshift-rosa-hcp[] -| -| xref:../backup_and_restore/application_backup_and_restore/oadp-intro.adoc#oadp-api[Back up and restore] -| -| -//adding condition to get hcp upgrading PR built -ifdef::openshift-rosa-hcp[] -xref:../upgrading/rosa-hcp-upgrading.adoc#rosa-hcp-upgrading[Upgrading] -endif::openshift-rosa-hcp[] -| - -|=== - - -[id="Developer"] -=== Developer - -[options="header",cols="3*"] -|=== -|Learn about application development in {hcp-title} |Deploy applications |Additional resources - -| link:https://developers.redhat.com/[Red{nbsp}Hat Developers site] -| xref:../applications/index.adoc#building-applications-overview[Building applications overview] -| xref:../support/index.adoc#support-overview[Getting support] - -| link:https://developers.redhat.com/products/openshift-dev-spaces/overview[{openshift-dev-spaces-productname} (formerly Red{nbsp}Hat CodeReady Workspaces)] -| xref:../operators/index.adoc#operators-overview[Operators overview] -| - -| -| xref:../openshift_images/index.adoc#overview-of-images[Images] -| - -| -| xref:../cli_reference/odo-important-update.adoc#odo-important_update[Developer-focused CLI] -| - -|=== diff --git a/welcome/cloud-experts-rosa-hcp-sts-explained.adoc b/welcome/cloud-experts-rosa-hcp-sts-explained.adoc deleted file mode 100644 index 98acd003d481..000000000000 --- a/welcome/cloud-experts-rosa-hcp-sts-explained.adoc +++ /dev/null @@ -1,129 +0,0 @@ -:_mod-docs-content-type: ASSEMBLY -[id="cloud-experts-rosa-hcp-sts-explained"] -= AWS STS and ROSA with HCP explained -include::_attributes/common-attributes.adoc[] 
-include::_attributes/attributes-openshift-dedicated.adoc[]
-:context: cloud-experts-rosa-hcp-sts-explained
-
-toc::[]
-
-//rosaworkshop.io content metadata
-//Brought into ROSA product docs 2023-10-26
-//Modified for HCP 2024-4-16
-
-{hcp-title-first} uses the Amazon Web Services (AWS) Security Token Service (STS) with AWS Identity and Access Management (IAM) to obtain the necessary credentials to interact with resources in your AWS account.
-
-[id="credential-methods-rosa-hcp"]
-== AWS STS credential method
-As part of {hcp-title}, Red{nbsp}Hat must be granted the necessary permissions to manage infrastructure resources in your AWS account.
-{hcp-title} grants the cluster's automation software limited, short-term access to resources in your AWS account.
-
-The STS method uses predefined roles and policies to grant temporary, least-privilege permissions to IAM roles. The credentials typically expire an hour after being requested. After the credentials expire, AWS no longer recognizes them, and API requests made with them have no access to resources in your account. For more information, see the link:https://docs.aws.amazon.com/STS/latest/APIReference/welcome.html[AWS documentation].
-
-AWS IAM STS roles must be created for each {hcp-title} cluster. The ROSA command-line interface (CLI), `rosa`, manages the STS roles and helps you attach the ROSA-specific, AWS-managed policies to each role. The CLI provides the commands and files that you need to create the roles and attach the AWS-managed policies, as well as an option that allows the CLI to create the roles and attach the policies automatically.
-//See [insert new xref when we have one for HCP] for more information about the different `--mode` options.
-
-[id="hcp-sts-security"]
-== AWS STS security
-Security features for AWS STS include:
-
-* An explicit and limited set of policies that the user creates ahead of time.
-** The user can review every requested permission needed by the platform.
-* The service cannot do anything outside of those permissions.
-* There is no need to rotate or revoke credentials. Whenever the service needs to perform an action, it obtains credentials that expire in one hour or less.
-* Credential expiration reduces the risks of credentials leaking and being reused.
-
-{hcp-title} grants cluster software components least-privilege permissions with short-term security credentials to specific and segregated IAM roles. The credentials are associated with IAM roles specific to each component and cluster that makes AWS API calls. This method aligns with principles of least privilege and secure practices in cloud service resource management.
-
-[id="components-specific-to-rosa-hcp-with-sts"]
-== Components of {hcp-title}
-* *AWS infrastructure* - The infrastructure required for the cluster, including the Amazon EC2 instances, Amazon EBS storage, and networking components. See xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-aws-compute-types_rosa-service-definition[AWS compute types] for the supported instance types for compute nodes and xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-ec2-instances_rosa-sts-aws-prereqs[provisioned AWS infrastructure] for more information about cloud resource configuration.
-// This section needs to remain hidden until the HCP migration is completed.
-// * *AWS infrastructure* - The infrastructure required for the cluster, including the Amazon EC2 instances, Amazon EBS storage, and networking components. See xref:../rosa_architecture/rosa_policy_service_definition/rosa-service-definition.adoc#rosa-sdpolicy-aws-compute-types_rosa-service-definition[AWS compute types] for the supported instance types for compute nodes and xref:../rosa_planning/rosa-sts-aws-prereqs.adoc#rosa-ec2-instances_rosa-sts-aws-prereqs[provisioned AWS infrastructure] for more information about cloud resource configuration.
-* *AWS STS* - A service that grants short-term, dynamic tokens that provide users with the permissions they need to interact temporarily with your AWS account resources.
-* *OpenID Connect (OIDC)* - A mechanism for cluster Operators to authenticate with AWS, assume the cluster roles through a trust policy, and obtain temporary credentials from AWS IAM STS to make the required API calls.
-* *Roles and policies* - The roles and policies used by {hcp-title} can be divided into account-wide roles and policies and Operator roles and policies.
-+
-The policies determine the allowed actions for each of the roles.
-ifdef::openshift-rosa[]
-See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] for more details about the individual roles and policies. See xref:../rosa_planning/rosa-sts-ocm-role.adoc#rosa-sts-ocm-role[ROSA IAM role resource] for more details about trust policies.
-endif::openshift-rosa[]
-ifdef::openshift-rosa-hcp[]
-See xref:../rosa_architecture/rosa-sts-about-iam-resources.adoc#rosa-sts-about-iam-resources[About IAM resources] for more details about the individual roles and policies. See xref:../rosa_planning/rosa-hcp-prepare-iam-roles-resources.adoc#rosa-hcp-prepare-iam-roles-resources[Required IAM roles and resources] for more details about preparing these resources in your cluster.
-endif::openshift-rosa-hcp[]
-+
---
-** The account-wide roles are:
-
-*** `-HCP-ROSA-Worker-Role`
-*** `-HCP-ROSA-Support-Role`
-*** `-HCP-ROSA-Installer-Role`
-
-** The account-wide AWS-managed policies are:
-
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAInstallerPolicy.html[ROSAInstallerPolicy]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAWorkerInstancePolicy.html[ROSAWorkerInstancePolicy]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSASRESupportPolicy.html[ROSASRESupportPolicy]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAIngressOperatorPolicy.html[ROSAIngressOperatorPolicy]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAAmazonEBSCSIDriverOperatorPolicy.html[ROSAAmazonEBSCSIDriverOperatorPolicy]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSACloudNetworkConfigOperatorPolicy.html[ROSACloudNetworkConfigOperatorPolicy]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAControlPlaneOperatorPolicy.html[ROSAControlPlaneOperatorPolicy]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAImageRegistryOperatorPolicy.html[ROSAImageRegistryOperatorPolicy]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAKMSProviderPolicy.html[ROSAKMSProviderPolicy]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAKubeControllerPolicy.html[ROSAKubeControllerPolicy]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSAManageSubscription.html[ROSAManageSubscription]
-*** link:https://docs.aws.amazon.com/aws-managed-policy/latest/reference/ROSANodePoolManagementPolicy.html[ROSANodePoolManagementPolicy]
---
-+
-[NOTE]
-====
-Certain policies are used by the cluster Operator roles, listed below. The Operator roles are created in a second step because they depend on an existing cluster name and cannot be created at the same time as the account-wide roles.
-====
-+
-** The Operator roles are:
-
-*** -openshift-cluster-csi-drivers-ebs-cloud-credentials
-*** -openshift-cloud-network-config-controller-cloud-credentials
-*** -openshift-machine-api-aws-cloud-credentials
-*** -openshift-cloud-credential-operator-cloud-credentials
-*** -openshift-image-registry-installer-cloud-credentials
-*** -openshift-ingress-operator-cloud-credentials
-+
-** Trust policies are created for each account-wide role and each Operator role.
-
-[id="deploying-rosa-hcp-with-sts-cluster"]
-== Deploying a {hcp-title} cluster
-
-Deploying a {hcp-title} cluster involves the following steps:
-
-. You create the account-wide roles.
-. You create the Operator roles.
-. Red{nbsp}Hat uses AWS STS to send AWS the required permissions that allow AWS to create and attach the corresponding AWS-managed Operator policies.
-. You create the OIDC provider.
-. You create the cluster.
-
-During the cluster creation process, the ROSA CLI creates the required JSON files for you and outputs the commands you need. If desired, the ROSA CLI can also run the commands for you.
-
-The ROSA CLI can create the roles for you automatically when you use the `--mode auto` flag, or you can create them manually by using the `--mode manual` flag. For further details about deployment, see xref:../rosa_hcp/rosa-hcp-sts-creating-a-cluster-quickly.adoc#rosa-sts-creating-cluster-using-customizations_rosa-sts-creating-a-cluster-with-customizations[Creating a cluster with customizations].
-
-[id="hcp-sts-process"]
-== {hcp-title} workflow
-The user creates the required account-wide roles. During role creation, a trust policy, known as a cross-account trust policy, is created, which allows a Red{nbsp}Hat-owned role to assume the roles. Trust policies are also created for the EC2 service, which allows workloads on EC2 instances to assume roles and obtain credentials. AWS assigns a corresponding permissions policy to each role.
-
-After the account-wide roles and policies are created, the user can create a cluster. After cluster creation is initiated, the user creates the Operator roles so that cluster Operators can make AWS API calls. These roles are then associated with the corresponding permissions policies that were created earlier and with a trust policy that uses an OIDC provider. The Operator roles differ from the account-wide roles in that they ultimately represent the pods that need access to AWS resources. Because a user cannot attach IAM roles to pods, they must create a trust policy with an OIDC provider so that the Operator, and therefore the pods, can access the roles they need.
-
-After the user assigns the roles to the corresponding permissions policies, the final step is creating the OIDC provider.
-
-image::cloud-experts-sts-explained_creation_flow_hcp.png[]
-
-When a new role is needed, the workload currently using the Red{nbsp}Hat role assumes the role in the AWS account, obtains temporary credentials from AWS STS, and begins performing the actions using API calls within the user's AWS account, as permitted by the assumed role's permissions policy. The credentials are temporary and have a maximum duration of one hour.
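-
-To make the short-lived nature of these credentials concrete, the following is a hedged sketch of what a role assumption looks like with the AWS CLI. The account ID, role name, and session name are illustrative placeholders, not values that ROSA creates for you.
-
-[source,terminal]
-----
-# Request temporary credentials for an assumed role (illustrative ARN);
-# 3600 seconds corresponds to the one-hour maximum described above
-$ aws sts assume-role \
-    --role-arn arn:aws:iam::111122223333:role/ManagedOpenShift-HCP-ROSA-Installer-Role \
-    --role-session-name example-session \
-    --duration-seconds 3600
-----
-
-The response includes temporary `AccessKeyId`, `SecretAccessKey`, and `SessionToken` values, along with an `Expiration` timestamp after which the credentials no longer work.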
-
-image::cloud-experts-sts-explained_highlevel.png[]
-
-//The entire workflow is depicted in the following graphic:
-
-//image::cloud-experts-sts-explained_entire_flow_hcp.png[]
-
-Operators use the following process to obtain the requisite credentials to perform their tasks. Each Operator is assigned an Operator role, a permissions policy, and a trust policy with an OIDC provider. The Operator assumes the role by passing a signed JSON web token that contains the role, read from a token file (`web_identity_token_file`), to the OIDC provider, which then verifies the signed token against a public key. The public key is created during cluster creation and stored in an S3 bucket. The subject in the signed token must match the role in the role's trust policy, which ensures that the OIDC provider can obtain only the allowed role. The OIDC provider then returns the temporary credentials to the Operator so that the Operator can make AWS API calls. For a visual representation, see the following diagram:
-
-image::cloud-experts-sts-explained_oidc_op_roles_hcp.png[]
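-
-As a hedged illustration of the token exchange described above, the following sketch shows the shape of the underlying STS call that an Operator's AWS SDK performs on its behalf. The account ID, role name, session name, and token file path are assumptions for illustration only; in a running cluster, the SDK reads the token path from the Operator's credentials configuration and performs the exchange automatically.
-
-[source,terminal]
-----
-# Exchange a signed OIDC token for temporary AWS credentials (illustrative values);
-# file:// tells the AWS CLI to read the token contents from the named file
-$ aws sts assume-role-with-web-identity \
-    --role-arn arn:aws:iam::111122223333:role/my-cluster-openshift-ingress-operator-cloud-credentials \
-    --role-session-name ingress-operator-example \
-    --web-identity-token file://web_identity_token_file
-----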