diff --git a/modules/annotating-a-route-with-a-cookie-name.adoc b/modules/annotating-a-route-with-a-cookie-name.adoc deleted file mode 100644 index 83b40c321f80..000000000000 --- a/modules/annotating-a-route-with-a-cookie-name.adoc +++ /dev/null @@ -1,37 +0,0 @@ -// Module included in the following assemblies: -// -// *using-cookies-to-keep-route-statefulness - -:_mod-docs-content-type: PROCEDURE -[id="annotating-a-route-with-a-cookie_{context}"] -= Annotating a route with a cookie - -You can set a cookie name to overwrite the default, auto-generated one for the -route. This allows the application receiving route traffic to know the cookie -name. By deleting the cookie it can force the next request to re-choose an -endpoint. So, if a server was overloaded it tries to remove the requests from the -client and redistribute them. - -.Procedure - -. Annotate the route with the desired cookie name: -+ -[source,terminal] ----- -$ oc annotate route router.openshift.io/="-" ----- -+ -For example, to annotate the cookie name of `my_cookie` to the `my_route` with -the annotation of `my_cookie_anno`: -+ -[source,terminal] ----- -$ oc annotate route my_route router.openshift.io/my_cookie="-my_cookie_anno" ----- - -. Save the cookie, and access the route: -+ -[source,terminal] ----- -$ curl $my_route -k -c /tmp/my_cookie ----- diff --git a/modules/capi-yaml-cluster.adoc b/modules/capi-yaml-cluster.adoc deleted file mode 100644 index ec38fc16023b..000000000000 --- a/modules/capi-yaml-cluster.adoc +++ /dev/null @@ -1,52 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/cluster_api_machine_management/cluster-api-getting-started.adoc - -:_mod-docs-content-type: REFERENCE -[id="capi-yaml-cluster_{context}"] -= Sample YAML for a Cluster API cluster resource - -The cluster resource defines the name and infrastructure provider for the cluster and is managed by the Cluster API. -This resource has the same structure for all providers. - -[source,yaml] ----- -apiVersion: cluster.x-k8s.io/v1beta1 -kind: Cluster -metadata: - name: # <1> - namespace: openshift-cluster-api -spec: - controlPlaneEndpoint: # <2> - host: - port: 6443 - infrastructureRef: - apiVersion: infrastructure.cluster.x-k8s.io/v1beta1 - kind: # <3> - name: - namespace: openshift-cluster-api ----- -<1> Specify the name of the cluster. -<2> Specify the IP address of the control plane endpoint and the port used to access it. -<3> Specify the infrastructure kind for the cluster. 
-The following values are valid: -+ -|==== -|Cluster cloud provider |Value - -|{aws-full} -|`AWSCluster` - -|{gcp-short} -|`GCPCluster` - -|{azure-short} -|`AzureCluster` - -|{rh-openstack} -|`OpenStackCluster` - -|{vmw-full} -|`VSphereCluster` - -|==== \ No newline at end of file diff --git a/modules/cco-short-term-creds-component-permissions-gcp.adoc b/modules/cco-short-term-creds-component-permissions-gcp.adoc deleted file mode 100644 index 1248c24440dd..000000000000 --- a/modules/cco-short-term-creds-component-permissions-gcp.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * authentication/managing_cloud_provider_credentials/cco-short-term-creds.adoc - -:_mod-docs-content-type: REFERENCE -[id="cco-short-term-creds-component-permissions-gcp_{context}"] -= GCP component secret permissions requirements - -//This topic is a placeholder for when GCP role granularity can bbe documented \ No newline at end of file diff --git a/modules/configuration-resource-overview.adoc b/modules/configuration-resource-overview.adoc deleted file mode 100644 index ceef1e265043..000000000000 --- a/modules/configuration-resource-overview.adoc +++ /dev/null @@ -1,68 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="configuration-resource-overview_{context}"] -= About Configuration Resources in {product-title} - -You perform many customization and configuration tasks after you deploy your -cluster, including configuring networking and setting your identity provider. - -In {product-title}, you modify Configuration Resources to determine the behavior -of these integrations. The Configuration Resources are controlled by Operators -that are managed by the Cluster Version Operator, which manages all of the -Operators that run your cluster's control plane. - -You can customize the following Configuration Resources: - -[cols="3a,8a",options="header"] -|=== - -|Configuration Resource |Description -|Authentication -| - -|DNS -| - -|Samples -| * *ManagementState:* -** *Managed.* The operator updates the samples as the configuration dictates. -** *Unmanaged.* The operator ignores updates to the samples resource object and -any imagestreams or templates in the `openshift` namespace. -** *Removed.* The operator removes the set of managed imagestreams -and templates in the `openshift` namespace. It ignores new samples created by -the cluster administrator or any samples in the skipped lists. After the removals are -complete, the operator works like it is in the `Unmanaged` state and ignores -any watch events on the sample resources, imagestreams, or templates. It -operates on secrets to facilitate the CENTOS to RHEL switch. There are some -caveats around concurrent create and removal. -* *Samples Registry:* Overrides the registry from which images are imported. -* *Architecture:* Place holder to choose an architecture type. Currently only x86 -is supported. -* *Skipped Imagestreams:* Imagestreams that are in the operator's -inventory, but that the cluster administrator wants the operator to ignore or not manage. -* *Skipped Templates:* Templates that are in the operator's inventory, but that -the cluster administrator wants the operator to ignore or not manage. - -|Infrastructure -| - -|Ingress -| - -|Network -| - -|OAuth -| - -|=== - -While you can complete many other customizations and configure other integrations -with an {product-title} cluster, configuring these resources is a common first -step after you deploy a cluster. 
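For example, the Samples resource described in the table above is typically tuned through a cluster-scoped configuration object. The following is a minimal sketch only, assuming the `samples.operator.openshift.io/v1` API group and a `Config` object named `cluster`; the registry host and the skipped image stream are placeholders:

[source,yaml]
----
apiVersion: samples.operator.openshift.io/v1
kind: Config
metadata:
  name: cluster
spec:
  managementState: Managed          # Managed, Unmanaged, or Removed, as described in the table
  samplesRegistry: registry.example.com   # placeholder: overrides the registry that samples are imported from
  skippedImagestreams:              # image streams the operator ignores
  - jenkins
----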
- -Like all Operators, the Configuration Resources are governed by -Custom Resource Definitions (CRD). You customize the CRD for each -Configuration Resource that you want to modify in your cluster. diff --git a/modules/configuring-layer-three-routed-topology.adoc b/modules/configuring-layer-three-routed-topology.adoc deleted file mode 100644 index 0b614d892a69..000000000000 --- a/modules/configuring-layer-three-routed-topology.adoc +++ /dev/null @@ -1,32 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/multiple_networks/configuring-additional-network.adoc - -:_mod-docs-content-type: CONCEPT -[id="configuration-layer-three-routed-topology_{context}"] -= Configuration for a routed topology - -The routed (layer 3) topology networks are a simplified topology for the cluster default network without egress or ingress. In this topology, there is one logical switch per node, each with a different subnet, and a router interconnecting all logical switches. - -This configuration can be used for IPv6 and dual-stack deployments. - -[NOTE] -==== -* Layer 3 routed topology networks only allow for the transfer of data packets between pods within a cluster. -* Creating a secondary network with an IPv6 subnet or dual-stack subnets fails on a single-stack {product-title} cluster. This is a known limitation and will be fixed a future version of {product-title}. -==== - -The following `NetworkAttachmentDefinition` custom resource definition (CRD) YAML describes the fields needed to configure a routed secondary network. - -[source,yaml] ----- - { - "cniVersion": "0.3.1", - "name": "ns1-l3-network", - "type": "ovn-k8s-cni-overlay", - "topology":"layer3", - "subnets": "10.128.0.0/16/24", - "mtu": 1300, - "netAttachDefName": "ns1/l3-network" - } ----- \ No newline at end of file diff --git a/modules/coreos-layering-configuring-on-extensions.adoc b/modules/coreos-layering-configuring-on-extensions.adoc deleted file mode 100644 index b36b7a64b120..000000000000 --- a/modules/coreos-layering-configuring-on-extensions.adoc +++ /dev/null @@ -1,116 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_configuration/coreos-layering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="coreos-layering-configuring-on-extensions_{context}"] -= Installing extensions into an on-cluster custom layered image - -You can install {op-system-first} extensions into your on-cluster custom layered image by creating a machine config that lists the extensions that you want to install. The Machine Config Operator (MCO) installs the extensions onto the nodes associated with a specific machine config pool (MCP). - -For a list of the currently supported extensions, see "Adding extensions to RHCOS." - -After you make the change, the MCO reboots the nodes associated with the specified machine config pool. - -[NOTE] -==== -include::snippets/coreos-layering-configuring-on-pause.adoc[] -==== - -.Prerequisites - -* You have opted in to {image-mode-os-on-caps} by creating a `MachineOSConfig` object. - -.Procedure - -. Create a YAML file for the machine config similar to the following example: -+ -[source,yaml] ----- -apiVersion: machineconfiguration.openshift.io/v1 <1> -kind: MachineConfig -metadata: - labels: - machineconfiguration.openshift.io/role: worker <2> - name: 80-worker-extensions -spec: - config: - ignition: - version: 3.2.0 - extensions: <3> - - usbguard - - kerberos ----- -<1> Specifies the `machineconfiguration.openshift.io/v1` API that is required for `MachineConfig` CRs. 
-<2> Specifies the machine config pool to apply the `MachineConfig` object to. -<3> Lists the {op-system-first} extensions that you want to install. - -. Create the MCP object: -+ -[source,terminal] ----- -$ oc create -f .yaml ----- - -.Verification - -. You can watch the build progress by using the following command: -+ -[source,terminal] ----- -$ oc get machineosbuilds ----- -+ -.Example output -[source,terminal] ----- -NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED -layered-f8ab2d03a2f87a2acd449177ceda805d False True False False False <1> ----- -<1> The value `True` in the `BUILDING` column indicates that the `MachineOSBuild` object is building. When the `SUCCEEDED` column reports `TRUE`, the build is complete. - -. You can watch as the new machine config is rolled out to the nodes by using the following command: -+ -[source,terminal] ----- -$ oc get machineconfigpools ----- -+ -.Example output -[source,terminal] ----- -NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE -master rendered-master-a0b404d061a6183cc36d302363422aba True False False 3 3 3 0 3h38m -worker rendered-worker-221507009cbcdec0eec8ab3ccd789d18 False True False 2 2 2 0 3h38m <1> ----- -<1> The value `FALSE` in the `UPDATED` column indicates that the `MachineOSBuild` object is building. When the `UPDATED` column reports `FALSE`, the new custom layered image has rolled out to the nodes. - -. When the associated machine config pool is updated, check that the extensions were installed: - -.. Open an `oc debug` session to the node by running the following command: -+ -[source,terminal] ----- -$ oc debug node/ ----- - -.. Set `/host` as the root directory within the debug shell by running the following command: -+ -[source,terminal] ----- -sh-5.1# chroot /host ----- - -.. Use an appropriate command to verify that the extensions were installed. The following example shows that the usbguard extension was installed: -+ -[source,terminal] ----- -sh-5.1# rpm -qa |grep usbguard ----- -+ -.Example output -[source,terminal] ----- -usbguard-selinux-1.0.0-15.el9.noarch -usbguard-1.0.0-15.el9.x86_64 ----- diff --git a/modules/cpmso-feat-vertical-resize.adoc b/modules/cpmso-feat-vertical-resize.adoc deleted file mode 100644 index c0f3ace4e601..000000000000 --- a/modules/cpmso-feat-vertical-resize.adoc +++ /dev/null @@ -1,7 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/cpmso-about.adoc - -:_mod-docs-content-type: CONCEPT -[id="cpmso-feat-vertical-resize_{context}"] -= Vertical resizing of the control plane \ No newline at end of file diff --git a/modules/creating-your-first-content.adoc b/modules/creating-your-first-content.adoc deleted file mode 100644 index 8eee346317ba..000000000000 --- a/modules/creating-your-first-content.adoc +++ /dev/null @@ -1,109 +0,0 @@ -// Module included in the following assemblies: -// -// assembly_getting-started-modular-docs-ocp.adoc - -// Base the file name and the ID on the module title. For example: -// * file name: doing-procedure-a.adoc -// * ID: [id="doing-procedure-a"] -// * Title: = Doing procedure A - -:_mod-docs-content-type: PROCEDURE -[id="creating-your-first-content_{context}"] -= Creating your first content - -In this procedure, you will create your first example content using modular -docs for the OpenShift docs repository. - -.Prerequisites - -* You have forked and then cloned the OpenShift docs repository locally. 
-* You have downloaded and are using Atom text editor for creating content. -* You have installed AsciiBinder (the build tool for OpenShift docs). - -.Procedure - -. Navigate to your locally cloned OpenShift docs repository on a command line. - -. Create a new feature branch: - -+ ----- -git checkout master -git checkout -b my_first_mod_docs ----- -+ -. If there is no `modules` directory in the root folder, create one. - -. In this `modules` directory, create a file called `my-first-module.adoc`. - -. Open this newly created file in Atom and copy into this file the contents from -the link:https://raw.githubusercontent.com/redhat-documentation/modular-docs/master/modular-docs-manual/files/TEMPLATE_PROCEDURE_doing-one-procedure.adoc[procedure template] -from Modular docs repository. - -. Replace the content in this file with some example text using the guidelines -in the comments. Give this module the title `My First Module`. Save this file. -You have just created your first module. - -. Create a new directory from the root of your OpenShift docs repository and -call it `my_guide`. - -. In this my_guide directory, create a new file called -`assembly_my-first-assembly.adoc`. - -. Open this newly created file in Atom and copy into this file the contents from -the link:https://raw.githubusercontent.com/redhat-documentation/modular-docs/master/modular-docs-manual/files/TEMPLATE_ASSEMBLY_a-collection-of-modules.adoc[assembly template] -from Modular docs repository. - -. Replace the content in this file with some example text using the guidelines -in the comments. Give this assembly the title: `My First Assembly`. - -. Before the first anchor id in this assembly file, add a `:context:` attribute: - -+ -`:context: assembly-first-content` - -. After the Prerequisites section, add the module created earlier (the following is -deliberately spelled incorrectly to pass validation. Use 'include' instead of 'ilude'): - -+ -`ilude::modules/my-first-module.adoc[leveloffset=+1]` - -+ -Remove the other includes that are present in this file. Save this file. - -. Open up `my-first-module.adoc` in the `modules` folder. At the top of -this file, in the comments section, add the following to indicate in which -assembly this module is being used: - -+ ----- -// Module included in the following assemblies: -// -// my_guide/assembly_my-first-assembly.adoc ----- - -. Open up `_topic_map.yml` from the root folder and add these lines at the end -of this file and then save. - -+ ----- ---- -Name: OpenShift CCS Mod Docs First Guide -Dir: my_guide -Distros: openshift-* -Topics: -- Name: My First Assembly - File: assembly_my-first-assembly ----- - -. On the command line, run `asciibinder` from the root folder of openshift-docs. -You do not have to add or commit your changes for asciibinder to run. - -. After the asciibinder build completes, open up your browser and navigate to -/openshift-docs/_preview/openshift-enterprise/my_first_mod_docs/my_guide/assembly_my-first-assembly.html - -. Confirm that your book `my_guide` has an assembly `My First Assembly` with the -contents from your module `My First Module`. - -NOTE: You can delete this branch now if you are done testing. This branch -should not be submitted to the upstream openshift-docs repository. 
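For reference, the module file that results from this procedure might look like the following minimal sketch. The body text is placeholder content, and the anchor ID follows the naming convention described in the template comments:

----
// Module included in the following assemblies:
//
// my_guide/assembly_my-first-assembly.adoc

[id="my-first-module_{context}"]
= My First Module

This paragraph is placeholder body text for the example module.
----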
diff --git a/modules/feature-gate-features.adoc b/modules/feature-gate-features.adoc deleted file mode 100644 index 55a72730f25f..000000000000 --- a/modules/feature-gate-features.adoc +++ /dev/null @@ -1,33 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/nodes-cluster-enabling-features.adoc - -[id="feature-gate-features_{context}"] -= Features that are affected by FeatureGates - -The following Technology Preview features included in {product-title}: - -[options="header"] -|=== -| FeatureGate| Description| Default - -|`RotateKubeletServerCertificate` -|Enables the rotation of the server TLS certificate on the cluster. -|True - -|`SupportPodPidsLimit` -|Enables support for limiting the number of processes (PIDs) running in a pod. -|True - -|`MachineHealthCheck` -|Enables automatically repairing unhealthy machines in a machine pool. -|True - -|`LocalStorageCapacityIsolation` -|Enable the consumption of local ephemeral storage and also the `sizeLimit` property of an `emptyDir` volume. -|False - -|=== - -You can enable these features by editing the Feature Gate Custom Resource. -Turning on these features cannot be undone and prevents the ability to upgrade your cluster. diff --git a/modules/importing-manifest-list-through-imagestreamimport.adoc b/modules/importing-manifest-list-through-imagestreamimport.adoc deleted file mode 100644 index 395ad21eb4c5..000000000000 --- a/modules/importing-manifest-list-through-imagestreamimport.adoc +++ /dev/null @@ -1,44 +0,0 @@ -// Module included in the following assemblies: -// * openshift_images/image-streams-manage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="importing-manifest-list-through-imagestreamimport_{context}"] -= Importing a manifest list through ImageStreamImport - - -You can use the `ImageStreamImport` resource to find and import image manifests from other container image registries into the cluster. Individual images or an entire image repository can be imported. - -Use the following procedure to import a manifest list through the `ImageStreamImport` object with the `importMode` value. - -.Procedure - -. Create an `ImageStreamImport` YAML file and set the `importMode` parameter to `PreserveOriginal` on the tags that you will import as a manifest list: -+ -[source,yaml] ----- -apiVersion: image.openshift.io/v1 -kind: ImageStreamImport -metadata: - name: app - namespace: myapp -spec: - import: true - images: - - from: - kind: DockerImage - name: // - to: - name: latest - referencePolicy: - type: Source - importPolicy: - importMode: "PreserveOriginal" ----- - -. Create the `ImageStreamImport` by running the following command: -+ -[source,terminal] ----- -$ oc create -f ----- - diff --git a/modules/install-creating-install-config-aws-local-zones-subnets.adoc b/modules/install-creating-install-config-aws-local-zones-subnets.adoc deleted file mode 100644 index 7dfce6af0479..000000000000 --- a/modules/install-creating-install-config-aws-local-zones-subnets.adoc +++ /dev/null @@ -1,35 +0,0 @@ -// Module included in the following assemblies: -// * installing/installing_aws/installing-aws-localzone.adoc - -:_mod-docs-content-type: PROCEDURE -[id="install-creating-install-config-aws-local-zones-subnets_{context}"] -= Modifying an installation configuration file to use AWS Local Zones subnets - -Modify an `install-config.yaml` file to include AWS Local Zones subnets. - -.Prerequisites - -* You created subnets by using the procedure "Creating a subnet in AWS Local Zones". 
-* You created an `install-config.yaml` file by using the procedure "Creating the installation configuration file". - -.Procedure - -* Modify the `install-config.yaml` configuration file by specifying Local Zone subnets in the `platform.aws.subnets` property, as demonstrated in the following example: -+ -[source,yaml] ----- -... -platform: - aws: - region: us-west-2 - subnets: <1> - - publicSubnetId-1 - - publicSubnetId-2 - - publicSubnetId-3 - - privateSubnetId-1 - - privateSubnetId-2 - - privateSubnetId-3 - - publicSubnetId-LocalZone-1 -... ----- -<1> List of subnets created in the Availability and Local Zones. \ No newline at end of file diff --git a/modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc b/modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc deleted file mode 100644 index f46247c6a305..000000000000 --- a/modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// This module is included in the following assemblies: -// -// installing/installing_sno/install-sno-preparing-to-install-sno.adoc - -:_mod-docs-content-type: CONCEPT -[id="additional-requirements-for-installing-sno-on-a-cloud-provider_{context}"] -= Additional requirements for installing {sno} on a cloud provider - -The AWS documentation for installer-provisioned installation is written with a high availability cluster consisting of three control plane nodes. When referring to the AWS documentation, consider the differences between the requirements for a {sno} cluster and a high availability cluster. - -* The required machines for cluster installation in AWS documentation indicates a temporary bootstrap machine, three control plane machines, and at least two compute machines. You require only a temporary bootstrap machine and one AWS instance for the control plane node and no worker nodes. - -* The minimum resource requirements for cluster installation in the AWS documentation indicates a control plane node with 4 vCPUs and 100GB of storage. For a single node cluster, you must have a minimum of 8 vCPU cores and 120GB of storage. - -* The `controlPlane.replicas` setting in the `install-config.yaml` file should be set to `1`. - -* The `compute.replicas` setting in the `install-config.yaml` file should be set to `0`. -This makes the control plane node schedulable. diff --git a/modules/installation-aws-editing-manifests.adoc b/modules/installation-aws-editing-manifests.adoc deleted file mode 100644 index 6ceecf30842f..000000000000 --- a/modules/installation-aws-editing-manifests.adoc +++ /dev/null @@ -1,116 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_aws/installing-aws-outposts-remote-workers.adoc - -ifeval::["{context}" == "aws-compute-edge-zone-tasks"] -:post-aws-zones: -endif::[] - -:_mod-docs-content-type: PROCEDURE -[id="installation-aws-creating-manifests_{context}"] -= Generating manifest files - -Use the installation program to generate a set of manifest files in the assets directory. Manifest files are required to specify the AWS Outposts subnets to use for worker machines, and to specify settings required by the network provider. - -If you plan to reuse the `install-config.yaml` file, create a backup file before you generate the manifest files. - -.Procedure - -. 
Optional: Create a backup copy of the `install-config.yaml` file: -+ -[source,terminal] ----- -$ cp install-config.yaml install-config.yaml.backup ----- - -. Generate a set of manifests in your assets directory: -+ -[source,terminal] ----- -$ openshift-install create manifests --dir ----- -+ -This command displays the following messages. -+ -.Example output -[source,terminal] ----- -INFO Consuming Install Config from target directory -INFO Manifests created in: /manifests and /openshift ----- -+ -The command generates the following manifest files: -+ -.Example output -[source,terminal] ----- -$ tree -. -├── manifests -│  ├── cluster-config.yaml -│  ├── cluster-dns-02-config.yml -│  ├── cluster-infrastructure-02-config.yml -│  ├── cluster-ingress-02-config.yml -│  ├── cluster-network-01-crd.yml -│  ├── cluster-network-02-config.yml -│  ├── cluster-proxy-01-config.yaml -│  ├── cluster-scheduler-02-config.yml -│  ├── cvo-overrides.yaml -│  ├── kube-cloud-config.yaml -│  ├── kube-system-configmap-root-ca.yaml -│  ├── machine-config-server-tls-secret.yaml -│  └── openshift-config-secret-pull-secret.yaml -└── openshift - ├── 99_cloud-creds-secret.yaml - ├── 99_kubeadmin-password-secret.yaml - ├── 99_openshift-cluster-api_master-machines-0.yaml - ├── 99_openshift-cluster-api_master-machines-1.yaml - ├── 99_openshift-cluster-api_master-machines-2.yaml - ├── 99_openshift-cluster-api_master-user-data-secret.yaml - ├── 99_openshift-cluster-api_worker-machineset-0.yaml - ├── 99_openshift-cluster-api_worker-user-data-secret.yaml - ├── 99_openshift-machineconfig_99-master-ssh.yaml - ├── 99_openshift-machineconfig_99-worker-ssh.yaml - ├── 99_role-cloud-creds-secret-reader.yaml - └── openshift-install-manifests.yaml - ----- - -[id="installation-aws-editing-manifests_{context}"] -== Modifying manifest files - -[NOTE] -==== -The AWS Outposts environments has the following limitations which require manual modification in the manifest generated files: - -* The maximum transmission unit (MTU) of a network connection is the size, in bytes, of the largest permissible packet that can be passed over the connection. The Outpost service link supports a maximum packet size of 1300 bytes. For more information about the service link, see link:https://docs.aws.amazon.com/outposts/latest/userguide/region-connectivity.html[Outpost connectivity to AWS Regions] - -You will find more information about how to change these values below. -==== - -* Use Outpost Subnet for workers `machineset` -+ -Modify the following file: -/openshift/99_openshift-cluster-api_worker-machineset-0.yaml -Find the subnet ID and replace it with the ID of the private subnet created in the Outpost. As a result, all the worker machines will be created in the Outpost. - -* Specify MTU value for the Network Provider -+ -Outpost service links support a maximum packet size of 1300 bytes. You must modify the MTU of the Network Provider to follow this requirement. -Create a new file under the manifests directory and name the file `cluster-network-03-config.yml`. For the OVN-Kubernetes network provider, set the MTU value to 1200. 
-+ -[source,yaml] ----- -apiVersion: operator.openshift.io/v1 -kind: Network -metadata: - name: cluster -spec: - defaultNetwork: - ovnKubernetesConfig: - mtu: 1200 ----- - -ifeval::["{context}" == "aws-compute-edge-zone-tasks"] -:!post-aws-zones: -endif::[] diff --git a/modules/installation-azure-finalizing-encryption.adoc b/modules/installation-azure-finalizing-encryption.adoc deleted file mode 100644 index 4cd4782b1e49..000000000000 --- a/modules/installation-azure-finalizing-encryption.adoc +++ /dev/null @@ -1,154 +0,0 @@ -//Module included in the following assemblies: -// -// * installing/installing_azure/installing-azure-customizations.adoc -// * installing/installing_azure/installing-azure-government-region.adoc -// * installing/installing_azure/installing-azure-private.adoc -// * installing/installing_azure/installing-azure-vnet.adoc - - -ifeval::["{context}" == "installing-azure-customizations"] -:azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-government-region"] -:azure-gov: -endif::[] -ifeval::["{context}" == "installing-azure-network-customizations"] -:azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-private"] -:azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-vnet"] -:azure-public: -endif::[] - -:_mod-docs-content-type: PROCEDURE -[id="finalizing-encryption_{context}"] -= Finalizing user-managed encryption after installation -If you installed {product-title} using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. - -.Procedure - -. Obtain the identity of the cluster resource group used by the installer: -.. If you specified an existing resource group in `install-config.yaml`, obtain its Azure identity by running the following command: -+ -[source,terminal] ----- -$ az identity list --resource-group "" ----- -.. If you did not specify a existing resource group in `install-config.yaml`, locate the resource group that the installer created, and then obtain its Azure identity by running the following commands: -+ -[source,terminal] ----- -$ az group list ----- -+ -[source,terminal] ----- -$ az identity list --resource-group "" ----- -+ -. Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command: -+ -[source,terminal] ----- -$ az role assignment create --role "" \// <1> - --assignee "" <2> ----- -<1> Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the `Owner` role or a custom role with the necessary permissions. -<2> Specifies the identity of the cluster resource group. -+ -. Obtain the `id` of the disk encryption set you created prior to installation by running the following command: -+ -[source,terminal] ----- -$ az disk-encryption-set show -n \// <1> - --resource-group <2> ----- -<1> Specifies the name of the disk encryption set. -<2> Specifies the resource group that contains the disk encryption set. -The `id` is in the format of `"/subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/diskEncryptionSets/..."`. -+ -. Obtain the identity of the cluster service principal by running the following command: -+ -[source,terminal] ----- -$ az identity show -g \// <1> - -n \// <2> - --query principalId --out tsv ----- -<1> Specifies the name of the cluster resource group created by the installation program. 
-<2> Specifies the name of the cluster service principal created by the installation program. -The identity is in the format of `12345678-1234-1234-1234-1234567890`. -ifdef::azure-gov[] -. Create a role assignment that grants the cluster service principal `Contributor` privileges to the disk encryption set by running the following command: -+ -[source,terminal] ----- -$ az role assignment create --assignee \// <1> - --role 'Contributor' \// - --scope \// <2> ----- -<1> Specifies the ID of the cluster service principal obtained in the previous step. -<2> Specifies the ID of the disk encryption set. -endif::azure-gov[] -ifdef::azure-public[] -. Create a role assignment that grants the cluster service principal necessary privileges to the disk encryption set by running the following command: -+ -[source,terminal] ----- -$ az role assignment create --assignee \// <1> - --role \// <2> - --scope \// <3> ----- -<1> Specifies the ID of the cluster service principal obtained in the previous step. -<2> Specifies the Azure role name. You can use the `Contributor` role or a custom role with the necessary permissions. -<3> Specifies the ID of the disk encryption set. -endif::azure-public[] -+ -. Create a storage class that uses the user-managed disk encryption set: -.. Save the following storage class definition to a file, for example `storage-class-definition.yaml`: -+ -[source,yaml] ----- -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: managed-premium -provisioner: kubernetes.io/azure-disk -parameters: - skuname: Premium_LRS - kind: Managed - diskEncryptionSetID: "" <1> - resourceGroup: "" <2> -reclaimPolicy: Delete -allowVolumeExpansion: true -volumeBindingMode: WaitForFirstConsumer ----- -<1> Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example `"/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx"`. -<2> Specifies the name of the resource group used by the installer. This is the same resource group from the first step. -.. Create the storage class `managed-premium` from the file you created by running the following command: -+ -[source,terminal] ----- -$ oc create -f storage-class-definition.yaml ----- -. Select the `managed-premium` storage class when you create persistent volumes to use encrypted storage. 
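+
For example, a persistent volume claim that consumes the encrypted storage class might look like the following sketch; the claim name and requested size are placeholders only:
+
[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-pvc          # placeholder claim name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium   # the storage class created in the previous step
  resources:
    requests:
      storage: 10Gi            # placeholder size
----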
- - - -ifeval::["{context}" == "installing-azure-customizations"] -:!azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-government-region"] -:!azure-gov: -endif::[] -ifeval::["{context}" == "installing-azure-network-customizations"] -:!azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-private"] -:!azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-vnet"] -:!azure-public: -endif::[] \ No newline at end of file diff --git a/modules/installation-gcp-shared-vpc-ingress.adoc b/modules/installation-gcp-shared-vpc-ingress.adoc deleted file mode 100644 index 38aabec405af..000000000000 --- a/modules/installation-gcp-shared-vpc-ingress.adoc +++ /dev/null @@ -1,49 +0,0 @@ -// File included in the following assemblies: -// * installation/installing_gcp/installing-gcp-shared-vpc.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installation-gcp-shared-vpc-ingress_{context}"] -= Optional: Adding Ingress DNS records for shared VPC installations -If the public DNS zone exists in a host project outside the project where you installed your cluster, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard `*.apps.{baseDomain}.` or specific records. You can use A, CNAME, and other records per your requirements. - -.Prerequisites -* You completed the installation of {product-title} on GCP into a shared VPC. -* Your public DNS zone exists in a host project separate from the service project that contains your cluster. - -.Procedure -. Verify that the Ingress router has created a load balancer and populated the `EXTERNAL-IP` field by running the following command: -+ -[source,terminal] ----- -$ oc -n openshift-ingress get service router-default ----- -+ -.Example output -[source,terminal] ----- -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 ----- -. Record the external IP address of the router by running the following command: -+ -[source,terminal] ----- -$ oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}' ----- -. Add a record to your GCP public zone with the router's external IP address and the name `*.apps..`. You can use the `gcloud` command-line utility or the GCP web console. -. To add manual records instead of a wildcard record, create entries for each of the cluster's current routes. 
You can gather these routes by running the following command: -+ -[source,terminal] ----- -$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes ----- -+ -.Example output -[source,terminal] ----- -oauth-openshift.apps.your.cluster.domain.example.com -console-openshift-console.apps.your.cluster.domain.example.com -downloads-openshift-console.apps.your.cluster.domain.example.com -alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com -prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com ----- diff --git a/modules/installation-identify-supported-aws-outposts-instance-types.adoc b/modules/installation-identify-supported-aws-outposts-instance-types.adoc deleted file mode 100644 index fbb9ee8ee8b7..000000000000 --- a/modules/installation-identify-supported-aws-outposts-instance-types.adoc +++ /dev/null @@ -1,27 +0,0 @@ -// Module included in the following assemblies: -// -// installing/installing_aws/installing-aws-outposts-remote-workers.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installation-identify-supported-aws-outposts-instance-types_{context}"] -= Identifying your AWS Outposts instance types - -AWS Outposts rack catalog includes options supporting the latest generation Intel powered EC2 instance types with or without local instance storage. -Identify which instance types are configured in your AWS Outpost instance. As part of the installation process, you must update the `install-config.yaml` file with the instance type that the installation program will use to deploy worker nodes. - -.Procedure - -Use the AWS CLI to get the list of supported instance types by running the following command: -[source,terminal] ----- -$ aws outposts get-outpost-instance-types --outpost-id <1> ----- -<1> For ``, specify the Outpost ID, used in the AWS account for the worker instances - -+ -[IMPORTANT] -==== -When you purchase capacity for your AWS Outpost instance, you specify an EC2 capacity layout that each server provides. Each server supports a single family of instance types. A layout can offer a single instance type or multiple instance types. Dedicated Hosts allows you to alter whatever you chose for that initial layout. If you allocate a host to support a single instance type for the entire capacity, you can only start a single instance type from that host. -==== - -Supported instance types in AWS Outposts might be changed. For more information, you can check the link:https://aws.amazon.com/outposts/rack/features/#Compute_and_storage[Compute and Storage] page in AWS Outposts documents. diff --git a/modules/installation-localzone-generate-k8s-manifest.adoc b/modules/installation-localzone-generate-k8s-manifest.adoc deleted file mode 100644 index 7243d4225873..000000000000 --- a/modules/installation-localzone-generate-k8s-manifest.adoc +++ /dev/null @@ -1,176 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_aws/installing-aws-localzone.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installation-localzone-generate-k8s-manifest_{context}"] -= Creating the Kubernetes manifest files - -Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest files that the cluster needs to configure the machines. - -.Prerequisites - -* You obtained the {product-title} installation program. -* You created the `install-config.yaml` installation configuration file. -* You installed the `jq` package. 
- -.Procedure - -. Change to the directory that contains the {product-title} installation program and generate the Kubernetes manifests for the cluster by running the following command: -+ -[source,terminal] ----- -$ ./openshift-install create manifests --dir <1> ----- -+ -<1> For ``, specify the installation directory that -contains the `install-config.yaml` file you created. - -. Set the default Maximum Transmission Unit (MTU) according to the network plugin: -+ -[IMPORTANT] -==== -Generally, the Maximum Transmission Unit (MTU) between an Amazon EC2 instance in a Local Zone and an Amazon EC2 instance in the Region is 1300. See link:https://docs.aws.amazon.com/local-zones/latest/ug/how-local-zones-work.html[How Local Zones work] in the AWS documentation. -The cluster network MTU must be always less than the EC2 MTU to account for the overhead. The specific overhead for the OVN-Kubernetes networking plugin is `100 bytes`. - -The network plugin could provide additional features, like IPsec, that also must decrease the MTU. Check the documentation for additional information. - -==== - -.. For the `OVN-Kubernetes` network plugin, enter the following command: -+ -[source,terminal] ----- -$ cat < /manifests/cluster-network-03-config.yml -apiVersion: operator.openshift.io/v1 -kind: Network -metadata: - name: cluster -spec: - defaultNetwork: - ovnKubernetesConfig: - mtu: 1200 -EOF ----- - -. Create the machine set manifests for the worker nodes in your Local Zone. -.. Export a local variable that contains the name of the Local Zone that you opted your AWS account into by running the following command: -+ -[source,terminal] ----- -$ export LZ_ZONE_NAME="" <1> ----- -<1> For ``, specify the Local Zone that you opted your AWS account into, such as `us-east-1-nyc-1a`. - -.. Review the instance types for the location that you will deploy to by running the following command: -+ -[source,terminal] ----- -$ aws ec2 describe-instance-type-offerings \ - --location-type availability-zone \ - --filters Name=location,Values=${LZ_ZONE_NAME} - --region <1> ----- -<1> For ``, specify the name of the region that you will deploy to, such as `us-east-1`. - -.. Export a variable to define the instance type for the worker machines to deploy on the Local Zone subnet by running the following command: -+ -[source,terminal] ----- -$ export INSTANCE_TYPE="" <1> ----- -<1> Set `` to a tested instance type, such as `c5d.2xlarge`. - -.. Store the AMI ID as a local variable by running the following command: -+ -[source,terminal] ----- -$ export AMI_ID=$(grep ami - /openshift/99_openshift-cluster-api_worker-machineset-0.yaml \ - | tail -n1 | awk '{print$2}') ----- - -.. Store the subnet ID as a local variable by running the following command: -+ -[source,terminal] ----- -$ export SUBNET_ID=$(aws cloudformation describe-stacks --stack-name "" \ <1> - | jq -r '.Stacks[0].Outputs[0].OutputValue') ----- -<1> For ``, specify the name of the subnet stack that you created. - -.. Store the cluster ID as local variable by running the following command: -+ -[source,terminal] ----- -$ export CLUSTER_ID="$(awk '/infrastructureName: / {print $2}' /manifests/cluster-infrastructure-02-config.yml)" ----- - -.. 
Create the worker manifest file for the Local Zone that your VPC uses by running the following command: -+ -[source,terminal] ----- -$ cat < /openshift/99_openshift-cluster-api_worker-machineset-nyc1.yaml -apiVersion: machine.openshift.io/v1beta1 -kind: MachineSet -metadata: - labels: - machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID} - name: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME} - namespace: openshift-machine-api -spec: - replicas: 1 - selector: - matchLabels: - machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID} - machine.openshift.io/cluster-api-machineset: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME} - template: - metadata: - labels: - machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID} - machine.openshift.io/cluster-api-machine-role: edge - machine.openshift.io/cluster-api-machine-type: edge - machine.openshift.io/cluster-api-machineset: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME} - spec: - metadata: - labels: - zone_type: local-zone - zone_group: ${LZ_ZONE_NAME:0:-1} - node-role.kubernetes.io/edge: "" - taints: - - key: node-role.kubernetes.io/edge - effect: NoSchedule - providerSpec: - value: - ami: - id: ${AMI_ID} - apiVersion: machine.openshift.io/v1beta1 - blockDevices: - - ebs: - volumeSize: 120 - volumeType: gp2 - credentialsSecret: - name: aws-cloud-credentials - deviceIndex: 0 - iamInstanceProfile: - id: ${CLUSTER_ID}-worker-profile - instanceType: ${INSTANCE_TYPE} - kind: AWSMachineProviderConfig - placement: - availabilityZone: ${LZ_ZONE_NAME} - region: ${CLUSTER_REGION} - securityGroups: - - filters: - - name: tag:Name - values: - - ${CLUSTER_ID}-worker-sg - subnet: - id: ${SUBNET_ID} - publicIp: true - tags: - - name: kubernetes.io/cluster/${CLUSTER_ID} - value: owned - userDataSecret: - name: worker-user-data -EOF ----- diff --git a/modules/installation-prereq-aws-private-cluster.adoc b/modules/installation-prereq-aws-private-cluster.adoc deleted file mode 100644 index 570d0bd4ba2a..000000000000 --- a/modules/installation-prereq-aws-private-cluster.adoc +++ /dev/null @@ -1,13 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_aws/installing-aws-government-region.adoc - -[id="installation-prereq-aws-private-cluster_{context}"] -= Installation requirements - -Before you can install the cluster, you must: - -* Provide an existing private AWS VPC and subnets to host the cluster. -+ -Public zones are not supported in Route 53 in AWS GovCloud. As a result, clusters must be private when you deploy to an AWS government region. -* Manually create the installation configuration file (`install-config.yaml`). diff --git a/modules/installation-requirements-user-infra-ibm-z-kvm.adoc b/modules/installation-requirements-user-infra-ibm-z-kvm.adoc deleted file mode 100644 index f35f57823fea..000000000000 --- a/modules/installation-requirements-user-infra-ibm-z-kvm.adoc +++ /dev/null @@ -1,195 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_ibm_z/installing-ibm-z-kvm.adoc - - -:_mod-docs-content-type: CONCEPT -[id="installation-requirements-user-infra_{context}"] -= Machine requirements for a cluster with user-provisioned infrastructure - -For a cluster that contains user-provisioned infrastructure, you must deploy all -of the required machines. - -One or more KVM host machines based on {op-system-base} 8.6 or later. Each {op-system-base} KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each {op-system-base} KVM host machine. 
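You can confirm that libvirt is installed and running on each {op-system-base} KVM host before you provision the virtual machines. The following commands are a minimal check and assume the standard `libvirtd` service name:

[source,terminal]
----
$ systemctl is-active libvirtd
----

[source,terminal]
----
$ virsh list --all
----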
- - -[id="machine-requirements_{context}"] -== Required machines - -The smallest {product-title} clusters require the following hosts: - -.Minimum required hosts -[options="header"] -|=== -|Hosts |Description - -|One temporary bootstrap machine -|The cluster requires the bootstrap machine to deploy the {product-title} cluster -on the three control plane machines. You can remove the bootstrap machine after -you install the cluster. -|Three control plane machines -|The control plane machines run the Kubernetes and {product-title} services that form the control plane. - -|At least two compute machines, which are also known as worker machines. -|The workloads requested by {product-title} users run on the compute machines. - -|=== - -[IMPORTANT] -==== -To improve high availability of your cluster, distribute the control plane machines over different {op-system-base} instances on at least two physical machines. -==== - -The bootstrap, control plane, and compute machines must use {op-system-first} as the operating system. - -See link:https://access.redhat.com/articles/rhel-limits[Red Hat Enterprise Linux technology capabilities and limits]. - -[id="network-connectivity_{context}"] -== Network connectivity requirements - -The {product-title} installer creates the Ignition files, which are necessary for all the {op-system-first} virtual machines. The automated installation of {product-title} is performed by the bootstrap machine. It starts the installation of {product-title} on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address. - -[id="ibm-z-network-connectivity_{context}"] -== {ibm-z-title} network connectivity requirements - -To install on {ibm-z-name} under {op-system-base} KVM, you need: - -* A {op-system-base} KVM host configured with an OSA or RoCE network adapter. -* Either a {op-system-base} KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. -+ -See link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/configuring_and_managing_virtualization/index#types-of-virtual-machine-network-connections_configuring-virtual-machine-network-connections[Types of virtual machine network connections]. - -[id="host-machine-resource-requirements_{context}"] -== Host machine resource requirements -The {op-system-base} KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the {product-title} environment. See link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/configuring_and_managing_virtualization/index#enabling-virtualization-on-ibm-z_assembly_enabling-virtualization-in-rhel-9[Enabling virtualization on {ibm-z-name}]. - -You can install {product-title} version {product-version} on the following {ibm-name} hardware: - -* {ibm-name} z16 (all models), {ibm-name} z15 (all models), {ibm-name} z14 (all models) -* {ibm-linuxone-name} 4 (all models), {ibm-linuxone-name} III (all models), {ibm-linuxone-name} Emperor II, {ibm-linuxone-name} Rockhopper II - -[id="minimum-ibm-z-system-requirements_{context}"] -== Minimum {ibm-z-title} system environment - - -=== Hardware requirements - -* The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. 
-* At least one network connection to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster. - -[NOTE] -==== -You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of {ibm-z-name}. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster. -==== - -[IMPORTANT] -==== -Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the {product-title} clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. -==== - - -=== Operating system requirements -* One LPAR running on {op-system-base} 8.6 or later with KVM, which is managed by libvirt - -On your {op-system-base} KVM host, set up: - -* Three guest virtual machines for {product-title} control plane machines -* Two guest virtual machines for {product-title} compute machines -* One guest virtual machine for the temporary {product-title} bootstrap machine - -[id="minimum-resource-requirements_{context}"] -== Minimum resource requirements - -Each cluster virtual machine must meet the following minimum requirements: - -[cols="2,2,2,2,2,2",options="header"] -|=== - -|Virtual Machine -|Operating System -|vCPU ^[1]^ -|Virtual RAM -|Storage -|IOPS - -|Bootstrap -|{op-system} -|4 -|16 GB -|100 GB -|N/A - -|Control plane -|{op-system} -|4 -|16 GB -|100 GB -|N/A - -|Compute -|{op-system} -|2 -|8 GB -|100 GB -|N/A - -|=== -[.small] --- -1. One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. --- - -[id="preferred-ibm-z-system-requirements_{context}"] -== Preferred {ibm-z-title} system environment - - -=== Hardware requirements - -* Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. -* Two network connections to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster. - - -=== Operating system requirements - -* For high availability, two or three LPARs running on {op-system-base} 8.6 or later with KVM, which are managed by libvirt. - -On your {op-system-base} KVM host, set up: - -* Three guest virtual machines for {product-title} control plane machines, distributed across the {op-system-base} KVM host machines. -* At least six guest virtual machines for {product-title} compute machines, distributed across the {op-system-base} KVM host machines. -* One guest virtual machine for the temporary {product-title} bootstrap machine. -* To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using `cpu_shares`. Do the same for infrastructure nodes, if they exist. See link:https://www.ibm.com/docs/en/linux-on-systems?topic=domain-schedinfo[schedinfo] in {ibm-name} Documentation. 
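For example, you can raise the CPU shares of a control plane guest from the {op-system-base} KVM host by using `virsh`. This is a sketch only; the domain name `control-plane-1` and the share value are illustrative:

[source,terminal]
----
$ virsh schedinfo control-plane-1 --set cpu_shares=2048
----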
- -[id="preferred-resource-requirements_{context}"] -== Preferred resource requirements - -The preferred requirements for each cluster virtual machine are: - -[cols="2,2,2,2,2",options="header"] -|=== - -|Virtual Machine -|Operating System -|vCPU -|Virtual RAM -|Storage - -|Bootstrap -|{op-system} -|4 -|16 GB -|120 GB - -|Control plane -|{op-system} -|8 -|16 GB -|120 GB - -|Compute -|{op-system} -|6 -|8 GB -|120 GB - -|=== diff --git a/modules/installing-gitops-operator-in-web-console.adoc b/modules/installing-gitops-operator-in-web-console.adoc deleted file mode 100644 index 099fd68de49e..000000000000 --- a/modules/installing-gitops-operator-in-web-console.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module is included in the following assemblies: -// -// * /cicd/gitops/installing-openshift-gitops.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installing-gitops-operator-in-web-console_{context}"] -= Installing {gitops-title} Operator in web console - -.Procedure - -. Open the *Administrator* perspective of the web console and navigate to *Ecosystem* -> *Software Catalog* in the menu on the left. - -. Search for `OpenShift GitOps`, click the *{gitops-title}* tile, and then click *Install*. -+ -{gitops-title} will be installed in all namespaces of the cluster. - -After the {gitops-title} Operator is installed, it automatically sets up a ready-to-use Argo CD instance that is available in the `openshift-gitops` namespace, and an Argo CD icon is displayed in the console toolbar. -You can create subsequent Argo CD instances for your applications under your projects. diff --git a/modules/installing-gitops-operator-using-cli.adoc b/modules/installing-gitops-operator-using-cli.adoc deleted file mode 100644 index afbcd33c97a6..000000000000 --- a/modules/installing-gitops-operator-using-cli.adoc +++ /dev/null @@ -1,60 +0,0 @@ -// Module is included in the following assemblies: -// -// * /cicd/gitops/installing-openshift-gitops.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installing-gitops-operator-using-cli_{context}"] -= Installing {gitops-title} Operator using CLI - -[role="_abstract"] -You can install {gitops-title} Operator from the software catalog using the CLI. - -.Procedure - -. Create a Subscription object YAML file to subscribe a namespace to the {gitops-title}, for example, `sub.yaml`: -+ -.Example Subscription -[source,yaml] ----- -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: openshift-gitops-operator - namespace: openshift-operators -spec: - channel: latest <1> - installPlanApproval: Automatic - name: openshift-gitops-operator <2> - source: redhat-operators <3> - sourceNamespace: openshift-marketplace <4> ----- -<1> Specify the channel name from where you want to subscribe the Operator. -<2> Specify the name of the Operator to subscribe to. -<3> Specify the name of the CatalogSource that provides the Operator. -<4> The namespace of the CatalogSource. Use `openshift-marketplace` for the default software catalog CatalogSources. -+ -. Apply the `Subscription` to the cluster: -+ -[source,terminal] ----- -$ oc apply -f openshift-gitops-sub.yaml ----- -. 
After the installation is complete, ensure that all the pods in the `openshift-gitops` namespace are running: -+ -[source,terminal] ----- -$ oc get pods -n openshift-gitops ----- -.Example output -+ -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -cluster-b5798d6f9-zr576 1/1 Running 0 65m -kam-69866d7c48-8nsjv 1/1 Running 0 65m -openshift-gitops-application-controller-0 1/1 Running 0 53m -openshift-gitops-applicationset-controller-6447b8dfdd-5ckgh 1/1 Running 0 65m -openshift-gitops-redis-74bd8d7d96-49bjf 1/1 Running 0 65m -openshift-gitops-repo-server-c999f75d5-l4rsg 1/1 Running 0 65m -openshift-gitops-server-5785f7668b-wj57t 1/1 Running 0 53m ----- diff --git a/modules/machine-feature-agnostic-options-label-gpu-autoscaler.adoc b/modules/machine-feature-agnostic-options-label-gpu-autoscaler.adoc deleted file mode 100644 index 3f891598b564..000000000000 --- a/modules/machine-feature-agnostic-options-label-gpu-autoscaler.adoc +++ /dev/null @@ -1,41 +0,0 @@ -// Module included in the following assemblies: -// - -:_mod-docs-content-type: CONCEPT -[id="machine-feature-agnostic-options-label-gpu-autoscaler_{context}"] -= Cluster autoscaler GPU labels - -You can indicate machines that the cluster autoscaler can deploy GPU-enabled nodes on by adding parameters to a compute machine set custom resource (CR). - -.Sample cluster autoscaler GPU label -[source,yaml] ----- -apiVersion: # <1> -kind: MachineSet -# ... -spec: - template: - spec: - metadata: - labels: - cluster-api/accelerator: # <2> -# ... ----- -<1> Specifies the API group and version of the machine set. -The following values are valid: -`cluster.x-k8s.io/v1beta1`:: The API group and version for Cluster API machine sets. -`machine.openshift.io/v1beta1`:: The API group and version for Machine API machine sets. -<2> Specifies a label to use for GPU-enabled nodes. -The label must use the following format: -+ --- -* Consists of alphanumeric characters, `-`, `_`, or `.`. -* Starts and ends with an alphanumeric character. --- -For example, this value might be `nvidia-t4` to represent Nvidia T4 GPUs, or `nvidia-a10g` for A10G GPUs. -+ -[NOTE] -==== -You must also specify the value of this label for the `spec.resourceLimits.gpus.type` parameter in your `ClusterAutoscaler` CR. -For more information, see "Cluster autoscaler resource definition". -==== \ No newline at end of file diff --git a/modules/machineset-osp-adding-bare-metal.adoc b/modules/machineset-osp-adding-bare-metal.adoc deleted file mode 100644 index bec69cb21cb5..000000000000 --- a/modules/machineset-osp-adding-bare-metal.adoc +++ /dev/null @@ -1,90 +0,0 @@ -:_mod-docs-content-type: PROCEDURE -[id="machineset-osp-adding-bare-metal_{context}"] -= Adding bare-metal compute machines to a {rh-openstack} cluster - -// TODO -// Mothballed -// Reintroduce when feature is available. -You can add bare-metal compute machines to an {product-title} cluster after you deploy it -on {rh-openstack-first}. In this configuration, all machines are attached to an -existing, installer-provisioned network, and traffic between control plane and -compute machines is routed between subnets. - -.Prerequisites - -* The {rh-openstack} link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/bare_metal_provisioning/index[Bare Metal service (Ironic)] is enabled and accessible by using the {rh-openstack} Compute API. 
-
-* Bare metal is available as link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_bare_metal_provisioning_service/assembly_configuring-the-bare-metal-provisioning-service-after-deployment#proc_creating-flavors-for-launching-bare-metal-instances_bare-metal-post-deployment[an {rh-openstack} flavor].
-
-* You deployed an {product-title} cluster on installer-provisioned infrastructure.
-
-* Your {rh-openstack} cloud provider is configured to route traffic between the installer-created VM
-subnet and the pre-existing bare metal subnet.
-
-.Procedure
-. Create a file called `baremetalMachineSet.yaml`, and then add the bare metal flavor to it:
-+
-.A sample bare metal MachineSet file
-[source,yaml]
-----
-apiVersion: machine.openshift.io/v1beta1
-kind: MachineSet
-metadata:
-  labels:
-    machine.openshift.io/cluster-api-cluster:
-    machine.openshift.io/cluster-api-machine-role:
-    machine.openshift.io/cluster-api-machine-type:
-  name: -
-  namespace: openshift-machine-api
-spec:
-  replicas:
-  selector:
-    matchLabels:
-      machine.openshift.io/cluster-api-cluster:
-      machine.openshift.io/cluster-api-machineset: -
-  template:
-    metadata:
-      labels:
-        machine.openshift.io/cluster-api-cluster:
-        machine.openshift.io/cluster-api-machine-role:
-        machine.openshift.io/cluster-api-machine-type:
-        machine.openshift.io/cluster-api-machineset: -
-    spec:
-      providerSpec:
-        value:
-          apiVersion: openstackproviderconfig.openshift.io/v1alpha1
-          cloudName: openstack
-          cloudsSecret:
-            name: openstack-cloud-credentials
-            namespace: openshift-machine-api
-          flavor:
-          image:
-          kind: OpenstackProviderSpec
-          networks:
-          - filter: {}
-            subnets:
-            - filter:
-                name:
-                tags: openshiftClusterID=
-          securityGroups:
-          - filter: {}
-            name: -
-          serverMetadata:
-            Name: -
-            openshiftClusterID:
-          tags:
-          - openshiftClusterID=
-          trunk: true
-          userDataSecret:
-            name: -user-data
-----
-
-. Create the `MachineSet` resource by running the following command:
-+
-[source,terminal]
-----
-$ oc create -f baremetalMachineSet.yaml
-----
-
-You can now use bare-metal compute machines in your {product-title} cluster.
diff --git a/modules/metering-cluster-capacity-examples.adoc
deleted file mode 100644
index 3bc78d3e5d22..000000000000
--- a/modules/metering-cluster-capacity-examples.adoc
+++ /dev/null
@@ -1,48 +0,0 @@
-// Module included in the following assemblies:
-//
-// * metering/metering-usage-examples.adoc
-
-[id="metering-cluster-capacity-examples_{context}"]
-= Measure cluster capacity hourly and daily
-
-The following report demonstrates how to measure cluster capacity both hourly and daily. The daily report works by aggregating the hourly report's results.
-
-The following report measures cluster CPU capacity every hour.
-
-.Hourly CPU capacity by cluster example
-
-[source,yaml]
-----
-apiVersion: metering.openshift.io/v1
-kind: Report
-metadata:
-  name: cluster-cpu-capacity-hourly
-spec:
-  query: "cluster-cpu-capacity"
-  schedule:
-    period: "hourly" <1>
-----
-<1> You could change this period to `daily` to get a daily report, but with larger data sets it is more efficient to use an hourly report, then aggregate your hourly data into a daily report.
-
-The following report aggregates the hourly data into a daily report.
-
-.Daily CPU capacity by cluster example
-
-[source,yaml]
-----
-apiVersion: metering.openshift.io/v1
-kind: Report
-metadata:
-  name: cluster-cpu-capacity-daily <1>
-spec:
-  query: "cluster-cpu-capacity" <2>
-  inputs: <3>
-  - name: ClusterCpuCapacityReportName
-    value: cluster-cpu-capacity-hourly
-  schedule:
-    period: "daily"
-----
-
-<1> To stay organized, remember to change the `name` of your report if you change any of the other values.
-<2> You can also measure `cluster-memory-capacity`. Remember to update the query in the associated hourly report as well.
-<3> The `inputs` section configures this report to aggregate the hourly report. Specifically, `value: cluster-cpu-capacity-hourly` is the name of the hourly report that gets aggregated.
diff --git a/modules/metering-cluster-usage-examples.adoc
deleted file mode 100644
index ed6188e8ca3b..000000000000
--- a/modules/metering-cluster-usage-examples.adoc
+++ /dev/null
@@ -1,27 +0,0 @@
-// Module included in the following assemblies:
-//
-// * metering/metering-usage-examples.adoc
-
-[id="metering-cluster-usage-examples_{context}"]
-= Measure cluster usage with a one-time report
-
-The following report measures cluster usage from a specific starting date forward. The report only runs once, after you save it and apply it.
-
-.CPU usage by cluster example
-
-[source,yaml]
-----
-apiVersion: metering.openshift.io/v1
-kind: Report
-metadata:
-  name: cluster-cpu-usage-2020 <1>
-spec:
-  reportingStart: '2020-01-01T00:00:00Z' <2>
-  reportingEnd: '2020-12-30T23:59:59Z'
-  query: cluster-cpu-usage <3>
-  runImmediately: true <4>
-----
-<1> To stay organized, remember to change the `name` of your report if you change any of the other values.
-<2> Configures the report to start using data from the `reportingStart` timestamp until the `reportingEnd` timestamp.
-<3> Adjust your query here. You can also measure cluster usage with the `cluster-memory-usage` query.
-<4> Configures the report to run immediately after saving it and applying it.
diff --git a/modules/metering-cluster-utilization-examples.adoc
deleted file mode 100644
index 4c1856b5217f..000000000000
--- a/modules/metering-cluster-utilization-examples.adoc
+++ /dev/null
@@ -1,26 +0,0 @@
-// Module included in the following assemblies:
-//
-// * metering/metering-usage-examples.adoc
-
-[id="metering-cluster-utilization-examples_{context}"]
-= Measure cluster utilization using cron expressions
-
-You can also use cron expressions when configuring the period of your reports. The following report measures cluster utilization by looking at CPU utilization from 9am-5pm every weekday.
-
-.Weekday CPU utilization by cluster example
-
-[source,yaml]
-----
-apiVersion: metering.openshift.io/v1
-kind: Report
-metadata:
-  name: cluster-cpu-utilization-weekdays <1>
-spec:
-  query: "cluster-cpu-utilization" <2>
-  schedule:
-    period: "cron"
-    expression: 0 0 * * 1-5 <3>
-----
-<1> To stay organized, remember to change the `name` of your report if you change any of the other values.
-<2> Adjust your query here. You can also measure cluster utilization with the `cluster-memory-utilization` query.
-<3> For cron periods, normal cron expressions are valid.
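-
-These example `Report` objects are created like any other resource. As a minimal sketch (the file name and the follow-up check are illustrative, not part of the original examples), you might save the weekday utilization report above to a file and create it in the metering namespace:
-
-[source,terminal]
-----
-$ oc apply -f cluster-cpu-utilization-weekdays.yaml -n openshift-metering
-----
-
-You can then confirm that the report was accepted by running `oc -n openshift-metering get reports` and checking that it appears in the list.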
diff --git a/modules/metering-configure-persistentvolumes.adoc
deleted file mode 100644
index 418782ec8b2b..000000000000
--- a/modules/metering-configure-persistentvolumes.adoc
+++ /dev/null
@@ -1,57 +0,0 @@
-// Module included in the following assemblies:
-//
-// * metering/configuring_metering/metering-configure-hive-metastore.adoc
-
-[id="metering-configure-persistentvolumes_{context}"]
-= Configuring persistent volumes
-
-By default, Hive requires one persistent volume to operate.
-
-`hive-metastore-db-data` is the main persistent volume claim (PVC) required by default. This PVC is used by the Hive metastore to store metadata about tables, such as table name, columns, and location. Hive metastore is used by Presto and the Hive server to look up table metadata when processing queries. You can remove this requirement by using MySQL or PostgreSQL for the Hive metastore database.
-
-To install the Hive metastore, you must either enable dynamic volume provisioning in a storage class, manually pre-create a persistent volume of the correct size, or use a pre-existing MySQL or PostgreSQL database.
-
-[id="metering-configure-persistentvolumes-storage-class-hive_{context}"]
-== Configuring the storage class for the Hive metastore
-To configure and specify a storage class for the `hive-metastore-db-data` persistent volume claim, specify the storage class in your `MeteringConfig` custom resource. An example `storage` section with the `class` field is included in the `metastore-storage.yaml` file below.
-
-[source,yaml]
-----
-apiVersion: metering.openshift.io/v1
-kind: MeteringConfig
-metadata:
-  name: "operator-metering"
-spec:
-  hive:
-    spec:
-      metastore:
-        storage:
-          # Default is null, which means using the default storage class if it exists.
-          # If you wish to use a different storage class, specify it here
-          # class: "null" <1>
-          size: "5Gi"
-----
-<1> Uncomment this line and replace `null` with the name of the storage class to use. Leaving the value `null` will cause metering to use the default storage class for the cluster.
-
-[id="metering-configure-persistentvolumes-volume-size-hive_{context}"]
-== Configuring the volume size for the Hive metastore
-
-Use the `metastore-storage.yaml` file below as a template to configure the volume size for the Hive metastore.
-
-[source,yaml]
-----
-apiVersion: metering.openshift.io/v1
-kind: MeteringConfig
-metadata:
-  name: "operator-metering"
-spec:
-  hive:
-    spec:
-      metastore:
-        storage:
-          # Default is null, which means using the default storage class if it exists.
-          # If you wish to use a different storage class, specify it here
-          # class: "null"
-          size: "5Gi" <1>
-----
-<1> Replace the value for `size` with your desired capacity. The example file shows `5Gi`.
diff --git a/modules/metering-debugging.adoc
deleted file mode 100644
index 4f55c188568d..000000000000
--- a/modules/metering-debugging.adoc
+++ /dev/null
@@ -1,228 +0,0 @@
-// Module included in the following assemblies:
-//
-// * metering/metering-troubleshooting-debugging.adoc
-
-[id="metering-debugging_{context}"]
-= Debugging metering
-
-Debugging metering is much easier when you interact directly with the various components. The sections below detail how you can connect to and query Presto and Hive, as well as how to view the dashboards of the Presto and HDFS components.
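-
-As a starting point, it can help to see which component pods are present before connecting to any of them. For example (assuming the default installation), you can list the pods in the metering namespace:
-
-[source,terminal]
-----
-$ oc -n openshift-metering get pods
-----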
- -[NOTE] -==== -All of the commands in this section assume you have installed metering through the software catalog in the `openshift-metering` namespace. -==== - -[id="metering-get-reporting-operator-logs_{context}"] -== Get reporting operator logs -Use the command below to follow the logs of the `reporting-operator`: - -[source,terminal] ----- -$ oc -n openshift-metering logs -f "$(oc -n openshift-metering get pods -l app=reporting-operator -o name | cut -c 5-)" -c reporting-operator ----- - -[id="metering-query-presto-using-presto-cli_{context}"] -== Query Presto using presto-cli -The following command opens an interactive presto-cli session where you can query Presto. This session runs in the same container as Presto and launches an additional Java instance, which can create memory limits for the pod. If this occurs, you should increase the memory request and limits of the Presto pod. - -By default, Presto is configured to communicate using TLS. You must use the following command to run Presto queries: - -[source,terminal] ----- -$ oc -n openshift-metering exec -it "$(oc -n openshift-metering get pods -l app=presto,presto=coordinator -o name | cut -d/ -f2)" \ - -- /usr/local/bin/presto-cli --server https://presto:8080 --catalog hive --schema default --user root --keystore-path /opt/presto/tls/keystore.pem ----- - -Once you run this command, a prompt appears where you can run queries. Use the `show tables from metering;` query to view the list of tables: - -[source,terminal] ----- -$ presto:default> show tables from metering; ----- - -.Example output -[source,terminal] ----- - Table - - datasource_your_namespace_cluster_cpu_capacity_raw - datasource_your_namespace_cluster_cpu_usage_raw - datasource_your_namespace_cluster_memory_capacity_raw - datasource_your_namespace_cluster_memory_usage_raw - datasource_your_namespace_node_allocatable_cpu_cores - datasource_your_namespace_node_allocatable_memory_bytes - datasource_your_namespace_node_capacity_cpu_cores - datasource_your_namespace_node_capacity_memory_bytes - datasource_your_namespace_node_cpu_allocatable_raw - datasource_your_namespace_node_cpu_capacity_raw - datasource_your_namespace_node_memory_allocatable_raw - datasource_your_namespace_node_memory_capacity_raw - datasource_your_namespace_persistentvolumeclaim_capacity_bytes - datasource_your_namespace_persistentvolumeclaim_capacity_raw - datasource_your_namespace_persistentvolumeclaim_phase - datasource_your_namespace_persistentvolumeclaim_phase_raw - datasource_your_namespace_persistentvolumeclaim_request_bytes - datasource_your_namespace_persistentvolumeclaim_request_raw - datasource_your_namespace_persistentvolumeclaim_usage_bytes - datasource_your_namespace_persistentvolumeclaim_usage_raw - datasource_your_namespace_persistentvolumeclaim_usage_with_phase_raw - datasource_your_namespace_pod_cpu_request_raw - datasource_your_namespace_pod_cpu_usage_raw - datasource_your_namespace_pod_limit_cpu_cores - datasource_your_namespace_pod_limit_memory_bytes - datasource_your_namespace_pod_memory_request_raw - datasource_your_namespace_pod_memory_usage_raw - datasource_your_namespace_pod_persistentvolumeclaim_request_info - datasource_your_namespace_pod_request_cpu_cores - datasource_your_namespace_pod_request_memory_bytes - datasource_your_namespace_pod_usage_cpu_cores - datasource_your_namespace_pod_usage_memory_bytes -(32 rows) - -Query 20210503_175727_00107_3venm, FINISHED, 1 node -Splits: 19 total, 19 done (100.00%) -0:02 [32 rows, 2.23KB] [19 rows/s, 1.37KB/s] - -presto:default> 
----- - -[id="metering-query-hive-using-beeline_{context}"] -== Query Hive using beeline -The following opens an interactive beeline session where you can query Hive. This session runs in the same container as Hive and launches an additional Java instance, which can create memory limits for the pod. If this occurs, you should increase the memory request and limits of the Hive pod. - -[source,terminal] ----- -$ oc -n openshift-metering exec -it $(oc -n openshift-metering get pods -l app=hive,hive=server -o name | cut -d/ -f2) \ - -c hiveserver2 -- beeline -u 'jdbc:hive2://127.0.0.1:10000/default;auth=noSasl' ----- - -Once you run this command, a prompt appears where you can run queries. Use the `show tables;` query to view the list of tables: - -[source,terminal] ----- -$ 0: jdbc:hive2://127.0.0.1:10000/default> show tables from metering; ----- - -.Example output -[source,terminal] ----- -+----------------------------------------------------+ -| tab_name | -+----------------------------------------------------+ -| datasource_your_namespace_cluster_cpu_capacity_raw | -| datasource_your_namespace_cluster_cpu_usage_raw | -| datasource_your_namespace_cluster_memory_capacity_raw | -| datasource_your_namespace_cluster_memory_usage_raw | -| datasource_your_namespace_node_allocatable_cpu_cores | -| datasource_your_namespace_node_allocatable_memory_bytes | -| datasource_your_namespace_node_capacity_cpu_cores | -| datasource_your_namespace_node_capacity_memory_bytes | -| datasource_your_namespace_node_cpu_allocatable_raw | -| datasource_your_namespace_node_cpu_capacity_raw | -| datasource_your_namespace_node_memory_allocatable_raw | -| datasource_your_namespace_node_memory_capacity_raw | -| datasource_your_namespace_persistentvolumeclaim_capacity_bytes | -| datasource_your_namespace_persistentvolumeclaim_capacity_raw | -| datasource_your_namespace_persistentvolumeclaim_phase | -| datasource_your_namespace_persistentvolumeclaim_phase_raw | -| datasource_your_namespace_persistentvolumeclaim_request_bytes | -| datasource_your_namespace_persistentvolumeclaim_request_raw | -| datasource_your_namespace_persistentvolumeclaim_usage_bytes | -| datasource_your_namespace_persistentvolumeclaim_usage_raw | -| datasource_your_namespace_persistentvolumeclaim_usage_with_phase_raw | -| datasource_your_namespace_pod_cpu_request_raw | -| datasource_your_namespace_pod_cpu_usage_raw | -| datasource_your_namespace_pod_limit_cpu_cores | -| datasource_your_namespace_pod_limit_memory_bytes | -| datasource_your_namespace_pod_memory_request_raw | -| datasource_your_namespace_pod_memory_usage_raw | -| datasource_your_namespace_pod_persistentvolumeclaim_request_info | -| datasource_your_namespace_pod_request_cpu_cores | -| datasource_your_namespace_pod_request_memory_bytes | -| datasource_your_namespace_pod_usage_cpu_cores | -| datasource_your_namespace_pod_usage_memory_bytes | -+----------------------------------------------------+ -32 rows selected (13.101 seconds) -0: jdbc:hive2://127.0.0.1:10000/default> ----- - -[id="metering-port-forward-hive-web-ui_{context}"] -== Port-forward to the Hive web UI -Run the following command to port-forward to the Hive web UI: - -[source,terminal] ----- -$ oc -n openshift-metering port-forward hive-server-0 10002 ----- - -You can now open http://127.0.0.1:10002 in your browser window to view the Hive web interface. 
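-
-If port 10002 is already in use on your local machine, `oc port-forward` also accepts an explicit `local:remote` port mapping. For example, the following forwards an arbitrary local port (10102 here is only an illustration) to the Hive web UI port:
-
-[source,terminal]
-----
-$ oc -n openshift-metering port-forward hive-server-0 10102:10002
-----
-
-You can then open http://127.0.0.1:10102 in your browser window instead.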
- -[id="metering-port-forward-hdfs_{context}"] -== Port-forward to HDFS -Run the following command to port-forward to the HDFS namenode: - -[source,terminal] ----- -$ oc -n openshift-metering port-forward hdfs-namenode-0 9870 ----- - -You can now open http://127.0.0.1:9870 in your browser window to view the HDFS web interface. - -Run the following command to port-forward to the first HDFS datanode: - -[source,terminal] ----- -$ oc -n openshift-metering port-forward hdfs-datanode-0 9864 <1> ----- -<1> To check other datanodes, replace `hdfs-datanode-0` with the pod you want to view information on. - -[id="metering-ansible-operator_{context}"] -== Metering Ansible Operator -Metering uses the Ansible Operator to watch and reconcile resources in a cluster environment. When debugging a failed metering installation, it can be helpful to view the Ansible logs or status of your `MeteringConfig` custom resource. - -[id="metering-accessing-ansible-logs_{context}"] -=== Accessing Ansible logs -In the default installation, the Metering Operator is deployed as a pod. In this case, you can check the logs of the Ansible container within this pod: - -[source,terminal] ----- -$ oc -n openshift-metering logs $(oc -n openshift-metering get pods -l app=metering-operator -o name | cut -d/ -f2) -c ansible ----- - -Alternatively, you can view the logs of the Operator container (replace `-c ansible` with `-c operator`) for condensed output. - -[id="metering-checking-meteringconfig-status_{context}"] -=== Checking the MeteringConfig Status -It can be helpful to view the `.status` field of your `MeteringConfig` custom resource to debug any recent failures. The following command shows status messages with type `Invalid`: - -[source,terminal] ----- -$ oc -n openshift-metering get meteringconfig operator-metering -o=jsonpath='{.status.conditions[?(@.type=="Invalid")].message}' ----- -// $ oc -n openshift-metering get meteringconfig operator-metering -o json | jq '.status' - -[id="metering-checking-meteringconfig-events_{context}"] -=== Checking MeteringConfig Events -Check events that the Metering Operator is generating. This can be helpful during installation or upgrade to debug any resource failures. 
Sort events by the last timestamp: - -[source,terminal] ----- -$ oc -n openshift-metering get events --field-selector involvedObject.kind=MeteringConfig --sort-by='.lastTimestamp' ----- - -.Example output with latest changes in the MeteringConfig resources -[source,terminal] ----- -LAST SEEN TYPE REASON OBJECT MESSAGE -4m40s Normal Validating meteringconfig/operator-metering Validating the user-provided configuration -4m30s Normal Started meteringconfig/operator-metering Configuring storage for the metering-ansible-operator -4m26s Normal Started meteringconfig/operator-metering Configuring TLS for the metering-ansible-operator -3m58s Normal Started meteringconfig/operator-metering Configuring reporting for the metering-ansible-operator -3m53s Normal Reconciling meteringconfig/operator-metering Reconciling metering resources -3m47s Normal Reconciling meteringconfig/operator-metering Reconciling monitoring resources -3m41s Normal Reconciling meteringconfig/operator-metering Reconciling HDFS resources -3m23s Normal Reconciling meteringconfig/operator-metering Reconciling Hive resources -2m59s Normal Reconciling meteringconfig/operator-metering Reconciling Presto resources -2m35s Normal Reconciling meteringconfig/operator-metering Reconciling reporting-operator resources -2m14s Normal Reconciling meteringconfig/operator-metering Reconciling reporting resources ----- diff --git a/modules/metering-exposing-the-reporting-api.adoc b/modules/metering-exposing-the-reporting-api.adoc deleted file mode 100644 index a40ed2d312fc..000000000000 --- a/modules/metering-exposing-the-reporting-api.adoc +++ /dev/null @@ -1,159 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-reporting-operator.adoc - -[id="metering-exposing-the-reporting-api_{context}"] -= Exposing the reporting API - -On {product-title} the default metering installation automatically exposes a route, making the reporting API available. This provides the following features: - -* Automatic DNS -* Automatic TLS based on the cluster CA - -Also, the default installation makes it possible to use the {product-title} service for serving certificates to protect the reporting API with TLS. The {product-title} OAuth proxy is deployed as a sidecar container for the Reporting Operator, which protects the reporting API with authentication. - -[id="metering-openshift-authentication_{context}"] -== Using {product-title} Authentication - -By default, the reporting API is secured with TLS and authentication. This is done by configuring the Reporting Operator to deploy a pod containing both the Reporting Operator's container, and a sidecar container running {product-title} auth-proxy. - -To access the reporting API, the Metering Operator exposes a route. After that route has been installed, you can run the following command to get the route's hostname. - -[source,terminal] ----- -$ METERING_ROUTE_HOSTNAME=$(oc -n openshift-metering get routes metering -o json | jq -r '.status.ingress[].host') ----- - -Next, set up authentication using either a service account token or basic authentication with a username and password. 
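-
-If `jq` is not available on your workstation, a JSONPath query is one possible way to capture the same hostname (this variant is a suggestion, not part of the original module):
-
-[source,terminal]
-----
-$ METERING_ROUTE_HOSTNAME=$(oc -n openshift-metering get route metering -o jsonpath='{.status.ingress[0].host}')
-----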
- -[id="metering-authenticate-using-service-account_{context}"] -=== Authenticate using a service account token -With this method, you use the token in the Reporting Operator's service account, and pass that bearer token to the Authorization header in the following command: - -[source,terminal] ----- -$ TOKEN=$(oc -n openshift-metering serviceaccounts get-token reporting-operator) -curl -H "Authorization: Bearer $TOKEN" -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]" ----- - -Be sure to replace the `name=[Report Name]` and `format=[Format]` parameters in the URL above. The `format` parameter can be json, csv, or tabular. - -[id="metering-authenticate-using-username-password_{context}"] -=== Authenticate using a username and password - -Metering supports configuring basic authentication using a username and password combination, which is specified in the contents of an htpasswd file. By default, a secret containing empty htpasswd data is created. You can, however, configure the `reporting-operator.spec.authProxy.htpasswd.data` and `reporting-operator.spec.authProxy.htpasswd.createSecret` keys to use this method. - -Once you have specified the above in your `MeteringConfig` resource, you can run the following command: - -[source,terminal] ----- -$ curl -u testuser:password123 -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]" ----- - -Be sure to replace `testuser:password123` with a valid username and password combination. - -[id="metering-manually-configure-authentication_{context}"] -== Manually Configuring Authentication -To manually configure, or disable OAuth in the Reporting Operator, you must set `spec.tls.enabled: false` in your `MeteringConfig` resource. - -[WARNING] -==== -This also disables all TLS and authentication between the Reporting Operator, Presto, and Hive. You would need to manually configure these resources yourself. -==== - -Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the {product-title} auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting API is not exposed directly, but instead is proxied to via the auth-proxy sidecar container. - -* `reporting-operator.spec.authProxy.enabled` -* `reporting-operator.spec.authProxy.cookie.createSecret` -* `reporting-operator.spec.authProxy.cookie.seed` - -You need to set `reporting-operator.spec.authProxy.enabled` and `reporting-operator.spec.authProxy.cookie.createSecret` to `true` and `reporting-operator.spec.authProxy.cookie.seed` to a 32-character random string. - -You can generate a 32-character random string using the following command. - -[source,terminal] ----- -$ openssl rand -base64 32 | head -c32; echo. ----- - -[id="metering-token-authentication_{context}"] -=== Token authentication - -When the following options are set to `true`, authentication using a bearer token is enabled for the reporting REST API. Bearer tokens can come from service accounts or users. 
- -* `reporting-operator.spec.authProxy.subjectAccessReview.enabled` -* `reporting-operator.spec.authProxy.delegateURLs.enabled` - -When authentication is enabled, the Bearer token used to query the reporting API of the user or service account must be granted access using one of the following roles: - -* report-exporter -* reporting-admin -* reporting-viewer -* metering-admin -* metering-viewer - -The Metering Operator is capable of creating role bindings for you, granting these permissions by specifying a list of subjects in the `spec.permissions` section. For an example, see the following `advanced-auth.yaml` example configuration. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - permissions: - # anyone in the "metering-admins" group can create, update, delete, etc any - # metering.openshift.io resources in the namespace. - # This also grants permissions to get query report results from the reporting REST API. - meteringAdmins: - - kind: Group - name: metering-admins - # Same as above except read only access and for the metering-viewers group. - meteringViewers: - - kind: Group - name: metering-viewers - # the default serviceaccount in the namespace "my-custom-ns" can: - # create, update, delete, etc reports. - # This also gives permissions query the results from the reporting REST API. - reportingAdmins: - - kind: ServiceAccount - name: default - namespace: my-custom-ns - # anyone in the group reporting-readers can get, list, watch reports, and - # query report results from the reporting REST API. - reportingViewers: - - kind: Group - name: reporting-readers - # anyone in the group cluster-admins can query report results - # from the reporting REST API. So can the user bob-from-accounting. - reportExporters: - - kind: Group - name: cluster-admins - - kind: User - name: bob-from-accounting - - reporting-operator: - spec: - authProxy: - # htpasswd.data can contain htpasswd file contents for allowing auth - # using a static list of usernames and their password hashes. - # - # username is 'testuser' password is 'password123' - # generated htpasswdData using: `htpasswd -nb -s testuser password123` - # htpasswd: - # data: | - # testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc= - # - # change REPLACEME to the output of your htpasswd command - htpasswd: - data: | - REPLACEME ----- - -Alternatively, you can use any role which has rules granting `get` permissions to `reports/export`. This means `get` access to the `export` sub-resource of the `Report` resources in the namespace of the Reporting Operator. For example: `admin` and `cluster-admin`. - -By default, the Reporting Operator and Metering Operator service accounts both have these permissions, and their tokens can be used for authentication. - -[id="metering-basic-authentication_{context}"] -=== Basic authentication with a username and password -For basic authentication you can supply a username and password in the `reporting-operator.spec.authProxy.htpasswd.data` field. The username and password must be the same format as those found in an htpasswd file. When set, you can use HTTP basic authentication to provide your username and password that has a corresponding entry in the `htpasswdData` contents. 
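-
-To produce an entry in that format, you can use the same `htpasswd` invocation that appears in the comments of the `advanced-auth.yaml` example above, and then paste the output into `reporting-operator.spec.authProxy.htpasswd.data`:
-
-[source,terminal]
-----
-$ htpasswd -nb -s testuser password123
-----
-
-.Example output
-[source,terminal]
-----
-testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc=
-----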
diff --git a/modules/metering-install-operator.adoc b/modules/metering-install-operator.adoc deleted file mode 100644 index 2417a72c53a6..000000000000 --- a/modules/metering-install-operator.adoc +++ /dev/null @@ -1,133 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-install-operator_{context}"] -= Installing the Metering Operator - -You can install metering by deploying the Metering Operator. The Metering Operator creates and manages the components of the metering stack. - -[NOTE] -==== -You cannot create a project starting with `openshift-` using the web console or by using the `oc new-project` command in the CLI. -==== - -[NOTE] -==== -If the Metering Operator is installed using a namespace other than `openshift-metering`, the metering reports are only viewable using the CLI. It is strongly suggested throughout the installation steps to use the `openshift-metering` namespace. -==== - -[id="metering-install-web-console_{context}"] -== Installing metering using the web console -You can use the {product-title} web console to install the Metering Operator. - -.Procedure - -. Create a namespace object YAML file for the Metering Operator with the `oc create -f .yaml` command. You must use the CLI to create the namespace. For example, `metering-namespace.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-metering <1> - annotations: - openshift.io/node-selector: "" <2> - labels: - openshift.io/cluster-monitoring: "true" ----- -<1> It is strongly recommended to deploy metering in the `openshift-metering` namespace. -<2> Include this annotation before configuring specific node selectors for the operand pods. - -. In the {product-title} web console, click *Ecosystem* -> *Software Catalog*. Filter for `metering` to find the Metering Operator. - -. Click the *Metering* card, review the package description, and then click *Install*. -. Select an *Update Channel*, *Installation Mode*, and *Approval Strategy*. -. Click *Install*. - -. Verify that the Metering Operator is installed by switching to the *Ecosystem* -> *Installed Operators* page. The Metering Operator has a *Status* of *Succeeded* when the installation is complete. -+ -[NOTE] -==== -It might take several minutes for the Metering Operator to appear. -==== - -. Click *Metering* on the *Installed Operators* page for Operator *Details*. From the *Details* page you can create different resources related to metering. - -To complete the metering installation, create a `MeteringConfig` resource to configure metering and install the components of the metering stack. - -[id="metering-install-cli_{context}"] -== Installing metering using the CLI - -You can use the {product-title} CLI to install the Metering Operator. - -.Procedure - -. Create a `Namespace` object YAML file for the Metering Operator. You must use the CLI to create the namespace. For example, `metering-namespace.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-metering <1> - annotations: - openshift.io/node-selector: "" <2> - labels: - openshift.io/cluster-monitoring: "true" ----- -<1> It is strongly recommended to deploy metering in the `openshift-metering` namespace. -<2> Include this annotation before configuring specific node selectors for the operand pods. - -. 
Create the `Namespace` object: -+ -[source,terminal] ----- -$ oc create -f .yaml ----- -+ -For example: -+ -[source,terminal] ----- -$ oc create -f openshift-metering.yaml ----- - -. Create the `OperatorGroup` object YAML file. For example, `metering-og`: -+ -[source,yaml] ----- -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: openshift-metering <1> - namespace: openshift-metering <2> -spec: - targetNamespaces: - - openshift-metering ----- -<1> The name is arbitrary. -<2> Specify the `openshift-metering` namespace. - -. Create a `Subscription` object YAML file to subscribe a namespace to the Metering Operator. This object targets the most recently released version in the `redhat-operators` catalog source. For example, `metering-sub.yaml`: -+ -[source,yaml, subs="attributes+"] ----- -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: metering-ocp <1> - namespace: openshift-metering <2> -spec: - channel: "{product-version}" <3> - source: "redhat-operators" <4> - sourceNamespace: "openshift-marketplace" - name: "metering-ocp" - installPlanApproval: "Automatic" <5> ----- -<1> The name is arbitrary. -<2> You must specify the `openshift-metering` namespace. -<3> Specify {product-version} as the channel. -<4> Specify the `redhat-operators` catalog source, which contains the `metering-ocp` package manifests. If your {product-title} is installed on a restricted network, also known as a disconnected cluster, specify the name of the `CatalogSource` object you created when you configured the Operator LifeCycle Manager (OLM). -<5> Specify "Automatic" install plan approval. diff --git a/modules/metering-install-prerequisites.adoc b/modules/metering-install-prerequisites.adoc deleted file mode 100644 index 293f9f55b897..000000000000 --- a/modules/metering-install-prerequisites.adoc +++ /dev/null @@ -1,13 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adoc - -[id="metering-install-prerequisites_{context}"] -= Prerequisites - -Metering requires the following components: - -* A `StorageClass` resource for dynamic volume provisioning. Metering supports a number of different storage solutions. -* 4GB memory and 4 CPU cores available cluster capacity and at least one node with 2 CPU cores and 2GB memory capacity available. -* The minimum resources needed for the largest single pod installed by metering are 2GB of memory and 2 CPU cores. -** Memory and CPU consumption may often be lower, but will spike when running reports, or collecting data for larger clusters. diff --git a/modules/metering-install-verify.adoc b/modules/metering-install-verify.adoc deleted file mode 100644 index b5ade5db2e56..000000000000 --- a/modules/metering-install-verify.adoc +++ /dev/null @@ -1,95 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adc - -[id="metering-install-verify_{context}"] -= Verifying the metering installation - -You can verify the metering installation by performing any of the following checks: - -* Check the Metering Operator `ClusterServiceVersion` (CSV) resource for the metering version. This can be done through either the web console or CLI. -+ --- -.Procedure (UI) - . Navigate to *Ecosystem* -> *Installed Operators* in the `openshift-metering` namespace. - . Click *Metering Operator*. - . Click *Subscription* for *Subscription Details*. - . Check the *Installed Version*. 
- -.Procedure (CLI) -* Check the Metering Operator CSV in the `openshift-metering` namespace: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering get csv ----- -+ -.Example output -[source,terminal,subs="attributes+"] ----- -NAME DISPLAY VERSION REPLACES PHASE -elasticsearch-operator.{product-version}.0-202006231303.p0 OpenShift Elasticsearch Operator {product-version}.0-202006231303.p0 Succeeded -metering-operator.v{product-version}.0 Metering {product-version}.0 Succeeded ----- --- - -* Check that all required pods in the `openshift-metering` namespace are created. This can be done through either the web console or CLI. -+ --- -[NOTE] -==== -Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator installation. -==== - -.Procedure (UI) -* Navigate to *Workloads* -> *Pods* in the metering namespace and verify that pods are being created. This can take several minutes after installing the metering stack. - -.Procedure (CLI) -* Check that all required pods in the `openshift-metering` namespace are created: -+ -[source,terminal] ----- -$ oc -n openshift-metering get pods ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -hive-metastore-0 2/2 Running 0 3m28s -hive-server-0 3/3 Running 0 3m28s -metering-operator-68dd64cfb6-2k7d9 2/2 Running 0 5m17s -presto-coordinator-0 2/2 Running 0 3m9s -reporting-operator-5588964bf8-x2tkn 2/2 Running 0 2m40s ----- --- - -* Verify that the `ReportDataSource` resources are beginning to import data, indicated by a valid timestamp in the `EARLIEST METRIC` column. This might take several minutes. Filter out the "-raw" `ReportDataSource` resources, which do not import data: -+ -[source,terminal] ----- -$ oc get reportdatasources -n openshift-metering | grep -v raw ----- -+ -.Example output -[source,terminal] ----- -NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE -node-allocatable-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T18:54:45Z 9m50s -node-allocatable-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T18:54:45Z 9m50s -node-capacity-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:39Z 9m50s -node-capacity-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T18:54:44Z 9m50s -persistentvolumeclaim-capacity-bytes 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:43Z 9m50s -persistentvolumeclaim-phase 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:28Z 9m50s -persistentvolumeclaim-request-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:34Z 9m50s -persistentvolumeclaim-usage-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:36Z 9m49s -pod-limit-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:26Z 9m49s -pod-limit-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:30Z 9m49s -pod-persistentvolumeclaim-request-info 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 
2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:37Z 9m49s -pod-request-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T18:54:24Z 9m49s -pod-request-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:32Z 9m49s -pod-usage-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T18:54:10Z 9m49s -pod-usage-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:20Z 9m49s ----- - -After all pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster. diff --git a/modules/metering-overview.adoc b/modules/metering-overview.adoc deleted file mode 100644 index abb20ed8c452..000000000000 --- a/modules/metering-overview.adoc +++ /dev/null @@ -1,33 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adoc -// * metering/metering-using-metering.adoc - -[id="metering-overview_{context}"] -= Metering overview - -Metering is a general purpose data analysis tool that enables you to write reports to process data from different data sources. As a cluster administrator, you can use metering to analyze what is happening in your cluster. You can either write your own, or use predefined SQL queries to define how you want to process data from the different data sources you have available. - -Metering focuses primarily on in-cluster metric data using Prometheus as a default data source, enabling users of metering to do reporting on pods, namespaces, and most other Kubernetes resources. - -You can install metering on {product-title} 4.x clusters and above. - -[id="metering-resources_{context}"] -== Metering resources - -Metering has many resources which can be used to manage the deployment and installation of metering, as well as the reporting functionality metering provides. - -Metering is managed using the following custom resource definitions (CRDs): - -[cols="1,7"] -|=== - -|*MeteringConfig* |Configures the metering stack for deployment. Contains customizations and configuration options to control each component that makes up the metering stack. - -|*Report* |Controls what query to use, when, and how often the query should be run, and where to store the results. - -|*ReportQuery* |Contains the SQL queries used to perform analysis on the data contained within `ReportDataSource` resources. - -|*ReportDataSource* |Controls the data available to `ReportQuery` and `Report` resources. Allows configuring access to different databases for use within metering. - -|=== diff --git a/modules/metering-prometheus-connection.adoc b/modules/metering-prometheus-connection.adoc deleted file mode 100644 index b713fd4ff17d..000000000000 --- a/modules/metering-prometheus-connection.adoc +++ /dev/null @@ -1,55 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-reporting-operator.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-prometheus-connection_{context}"] -= Securing a Prometheus connection - -When you install metering on {product-title}, Prometheus is available at https://prometheus-k8s.openshift-monitoring.svc:9091/. - -To secure the connection to Prometheus, the default metering installation uses the {product-title} certificate authority (CA). 
If your Prometheus instance uses a different CA, you can inject the CA through a config map. You can also configure the Reporting Operator to use a specified bearer token to authenticate with Prometheus. - -.Procedure - -* Inject the CA that your Prometheus instance uses through a config map. For example: -+ -[source,yaml] ----- -spec: - reporting-operator: - spec: - config: - prometheus: - certificateAuthority: - useServiceAccountCA: false - configMap: - enabled: true - create: true - name: reporting-operator-certificate-authority-config - filename: "internal-ca.crt" - value: | - -----BEGIN CERTIFICATE----- - (snip) - -----END CERTIFICATE----- ----- -+ -Alternatively, to use the system certificate authorities for publicly valid certificates, set both `useServiceAccountCA` and `configMap.enabled` to `false`. - -* Specify a bearer token to authenticate with Prometheus. For example: - -[source,yaml] ----- -spec: - reporting-operator: - spec: - config: - prometheus: - metricsImporter: - auth: - useServiceAccountToken: false - tokenSecret: - enabled: true - create: true - value: "abc-123" ----- diff --git a/modules/metering-reports.adoc b/modules/metering-reports.adoc deleted file mode 100644 index e9cb4025d9e7..000000000000 --- a/modules/metering-reports.adoc +++ /dev/null @@ -1,381 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-about-reports.adoc -[id="metering-reports_{context}"] -= Reports - -The `Report` custom resource is used to manage the execution and status of reports. Metering produces reports derived from usage data sources, which can be used in further analysis and filtering. A single `Report` resource represents a job that manages a database table and updates it with new information according to a schedule. The report exposes the data in that table via the Reporting Operator HTTP API. - -Reports with a `spec.schedule` field set are always running and track what time periods it has collected data for. This ensures that if metering is shutdown or unavailable for an extended period of time, it backfills the data starting where it left off. If the schedule is unset, then the report runs once for the time specified by the `reportingStart` and `reportingEnd`. By default, reports wait for `ReportDataSource` resources to have fully imported any data covered in the reporting period. If the report has a schedule, it waits to run until the data in the period currently being processed has finished importing. - -[id="metering-example-report-with-schedule_{context}"] -== Example report with a schedule - -The following example `Report` object contains information on every pod's CPU requests, and runs every hour, adding the last hours worth of data each time it runs. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - reportingStart: "2021-07-01T00:00:00Z" - schedule: - period: "hourly" - hourly: - minute: 0 - second: 0 ----- - -[id="metering-example-report-without-schedule_{context}"] -== Example report without a schedule (run-once) - -The following example `Report` object contains information on every pod's CPU requests for all of July. After completion, it does not run again. 
- -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - reportingStart: "2021-07-01T00:00:00Z" - reportingEnd: "2021-07-31T00:00:00Z" ----- - -[id="metering-query_{context}"] -== query - -The `query` field names the `ReportQuery` resource used to generate the report. The report query controls the schema of the report as well as how the results are processed. - -*`query` is a required field.* - -Use the following command to list available `ReportQuery` resources: - -[source,terminal] ----- -$ oc -n openshift-metering get reportqueries ----- - -.Example output -[source,terminal] ----- -NAME AGE -cluster-cpu-capacity 23m -cluster-cpu-capacity-raw 23m -cluster-cpu-usage 23m -cluster-cpu-usage-raw 23m -cluster-cpu-utilization 23m -cluster-memory-capacity 23m -cluster-memory-capacity-raw 23m -cluster-memory-usage 23m -cluster-memory-usage-raw 23m -cluster-memory-utilization 23m -cluster-persistentvolumeclaim-request 23m -namespace-cpu-request 23m -namespace-cpu-usage 23m -namespace-cpu-utilization 23m -namespace-memory-request 23m -namespace-memory-usage 23m -namespace-memory-utilization 23m -namespace-persistentvolumeclaim-request 23m -namespace-persistentvolumeclaim-usage 23m -node-cpu-allocatable 23m -node-cpu-allocatable-raw 23m -node-cpu-capacity 23m -node-cpu-capacity-raw 23m -node-cpu-utilization 23m -node-memory-allocatable 23m -node-memory-allocatable-raw 23m -node-memory-capacity 23m -node-memory-capacity-raw 23m -node-memory-utilization 23m -persistentvolumeclaim-capacity 23m -persistentvolumeclaim-capacity-raw 23m -persistentvolumeclaim-phase-raw 23m -persistentvolumeclaim-request 23m -persistentvolumeclaim-request-raw 23m -persistentvolumeclaim-usage 23m -persistentvolumeclaim-usage-raw 23m -persistentvolumeclaim-usage-with-phase-raw 23m -pod-cpu-request 23m -pod-cpu-request-raw 23m -pod-cpu-usage 23m -pod-cpu-usage-raw 23m -pod-memory-request 23m -pod-memory-request-raw 23m -pod-memory-usage 23m -pod-memory-usage-raw 23m ----- - -Report queries with the `-raw` suffix are used by other `ReportQuery` resources to build more complex queries, and should not be used directly for reports. - -`namespace-` prefixed queries aggregate pod CPU and memory requests by namespace, providing a list of namespaces and their overall usage based on resource requests. - -`pod-` prefixed queries are similar to `namespace-` prefixed queries but aggregate information by pod rather than namespace. These queries include the pod's namespace and node. - -`node-` prefixed queries return information about each node's total available resources. - -`aws-` prefixed queries are specific to AWS. Queries suffixed with `-aws` return the same data as queries of the same name without the suffix, and correlate usage with the EC2 billing data. - -The `aws-ec2-billing-data` report is used by other queries, and should not be used as a standalone report. The `aws-ec2-cluster-cost` report provides a total cost based on the nodes included in the cluster, and the sum of their costs for the time period being reported on. - -Use the following command to get the `ReportQuery` resource as YAML, and check the `spec.columns` field. 
For example, run: - -[source,terminal] ----- -$ oc -n openshift-metering get reportqueries namespace-memory-request -o yaml ----- - -.Example output -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: ReportQuery -metadata: - name: namespace-memory-request - labels: - operator-metering: "true" -spec: - columns: - - name: period_start - type: timestamp - unit: date - - name: period_end - type: timestamp - unit: date - - name: namespace - type: varchar - unit: kubernetes_namespace - - name: pod_request_memory_byte_seconds - type: double - unit: byte_seconds ----- - -[id="metering-schedule_{context}"] -== schedule - -The `spec.schedule` configuration block defines when the report runs. The main fields in the `schedule` section are `period`, and then depending on the value of `period`, the fields `hourly`, `daily`, `weekly`, and `monthly` allow you to fine-tune when the report runs. - -For example, if `period` is set to `weekly`, you can add a `weekly` field to the `spec.schedule` block. The following example will run once a week on Wednesday, at 1 PM (hour 13 in the day). - -[source,yaml] ----- -... - schedule: - period: "weekly" - weekly: - dayOfWeek: "wednesday" - hour: 13 -... ----- - -[id="metering-period_{context}"] -=== period - -Valid values of `schedule.period` are listed below, and the options available to set for a given period are also listed. - -* `hourly` -** `minute` -** `second` -* `daily` -** `hour` -** `minute` -** `second` -* `weekly` -** `dayOfWeek` -** `hour` -** `minute` -** `second` -* `monthly` -** `dayOfMonth` -** `hour` -** `minute` -** `second` -* `cron` -** `expression` - -Generally, the `hour`, `minute`, `second` fields control when in the day the report runs, and `dayOfWeek`/`dayOfMonth` control what day of the week, or day of month the report runs on, if it is a weekly or monthly report period. - -For each of these fields, there is a range of valid values: - -* `hour` is an integer value between 0-23. -* `minute` is an integer value between 0-59. -* `second` is an integer value between 0-59. -* `dayOfWeek` is a string value that expects the day of the week (spelled out). -* `dayOfMonth` is an integer value between 1-31. - -For cron periods, normal cron expressions are valid: - -* `expression: "*/5 * * * *"` - -[id="metering-reportingStart_{context}"] -== reportingStart - -To support running a report against existing data, you can set the `spec.reportingStart` field to a link:https://tools.ietf.org/html/rfc3339#section-5.8[RFC3339 timestamp] to tell the report to run according to its `schedule` starting from `reportingStart` rather than the current time. - -[NOTE] -==== -Setting the `spec.reportingStart` field to a specific time will result in the Reporting Operator running many queries in succession for each interval in the schedule that is between the `reportingStart` time and the current time. This could be thousands of queries if the period is less than daily and the `reportingStart` is more than a few months back. If `reportingStart` is left unset, the report will run at the next full `reportingPeriod` after the time the report is created. 
-==== - -As an example of how to use this field, if you had data already collected dating back to January 1st, 2019 that you want to include in your `Report` object, you can create a report with the following values: - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - schedule: - period: "hourly" - reportingStart: "2021-01-01T00:00:00Z" ----- - -[id="metering-reportingEnd_{context}"] -== reportingEnd - -To configure a report to only run until a specified time, you can set the `spec.reportingEnd` field to an link:https://tools.ietf.org/html/rfc3339#section-5.8[RFC3339 timestamp]. The value of this field will cause the report to stop running on its schedule after it has finished generating reporting data for the period covered from its start time until `reportingEnd`. - -Because a schedule will most likely not align with the `reportingEnd`, the last period in the schedule will be shortened to end at the specified `reportingEnd` time. If left unset, then the report will run forever, or until a `reportingEnd` is set on the report. - -For example, if you want to create a report that runs once a week for the month of July: - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - schedule: - period: "weekly" - reportingStart: "2021-07-01T00:00:00Z" - reportingEnd: "2021-07-31T00:00:00Z" ----- - -[id="metering-expiration_{context}"] -== expiration - -Add the `expiration` field to set a retention period on a scheduled metering report. You can avoid manually removing the report by setting the `expiration` duration value. The retention period is equal to the `Report` object `creationDate` plus the `expiration` duration. The report is removed from the cluster at the end of the retention period if no other reports or report queries depend on the expiring report. Deleting the report from the cluster can take several minutes. - -[NOTE] -==== -Setting the `expiration` field is not recommended for roll-up or aggregated reports. If a report is depended upon by other reports or report queries, then the report is not removed at the end of the retention period. You can view the `report-operator` logs at debug level for the timing output around a report retention decision. -==== - -For example, the following scheduled report is deleted 30 minutes after the `creationDate` of the report: - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - schedule: - period: "weekly" - reportingStart: "2021-07-01T00:00:00Z" - expiration: "30m" <1> ----- -<1> Valid time units for the `expiration` duration are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. - -[NOTE] -==== -The `expiration` retention period for a `Report` object is not precise and works on the order of several minutes, not nanoseconds. -==== - -[id="metering-runImmediately_{context}"] -== runImmediately - -When `runImmediately` is set to `true`, the report runs immediately. This behavior ensures that the report is immediately processed and queued without requiring additional scheduling parameters. - -[NOTE] -==== -When `runImmediately` is set to `true`, you must set a `reportingEnd` and `reportingStart` value. 
-====
-
-[id="metering-inputs_{context}"]
-== inputs
-
-The `spec.inputs` field of a `Report` object can be used to override or set values defined in a `ReportQuery` resource's `spec.inputs` field.
-
-`spec.inputs` is a list of name-value pairs:
-
-[source,yaml]
-----
-spec:
-  inputs:
-  - name: "NamespaceCPUUsageReportName" <1>
-    value: "namespace-cpu-usage-hourly" <2>
-----
-
-<1> The `name` of an input must exist in the `ReportQuery` resource's `inputs` list.
-<2> The `value` of the input must be the correct type for the input's `type`.
-
-// TODO(chance): include modules/metering-reportquery-inputs.adoc module
-
-[id="metering-roll-up-reports_{context}"]
-== Roll-up reports
-
-Report data is stored in the database much like metrics themselves, and therefore can be used in aggregated or roll-up reports. A simple use case for a roll-up report is to spread the time required to produce a report over a longer period of time, instead of requiring a single monthly report to query and aggregate all of the data for an entire month. For example, the task can be split into daily reports that each run over 1/30 of the data.
-
-A custom roll-up report requires a custom report query. The `ReportQuery` resource template processor provides a `reportTableName` function that can get the necessary table name from a `Report` object's `metadata.name`.
-
-Below is a snippet taken from a built-in query:
-
-.pod-cpu.yaml
-[source,yaml]
-----
-spec:
-...
-  inputs:
-  - name: ReportingStart
-    type: time
-  - name: ReportingEnd
-    type: time
-  - name: NamespaceCPUUsageReportName
-    type: Report
-  - name: PodCpuUsageRawDataSourceName
-    type: ReportDataSource
-    default: pod-cpu-usage-raw
-...
-
-  query: |
-...
-    {|- if .Report.Inputs.NamespaceCPUUsageReportName |}
-      namespace,
-      sum(pod_usage_cpu_core_seconds) as pod_usage_cpu_core_seconds
-    FROM {| .Report.Inputs.NamespaceCPUUsageReportName | reportTableName |}
-...
-----
-
-.Example `aggregated-report.yaml` roll-up report
-[source,yaml]
-----
-spec:
-  query: "namespace-cpu-usage"
-  inputs:
-  - name: "NamespaceCPUUsageReportName"
-    value: "namespace-cpu-usage-hourly"
-----
-
-// TODO(chance): replace the comment below with an include on the modules/metering-rollup-report.adoc
-// For more information on setting up a roll-up report, see the [roll-up report guide](rollup-reports.md).
-
-[id="metering-report-status_{context}"]
-== Report status
-
-The execution of a scheduled report can be tracked using its status field. Any errors occurring during the preparation of a report are recorded here.
-
-The `status` field of a `Report` object currently has two fields:
-
-* `conditions`: A list of conditions, each of which has a `type`, `status`, `reason`, and `message` field. Possible values of a condition's `type` field are `Running` and `Failure`, indicating the current state of the scheduled report. The `reason` indicates why the condition is in its current state, with the `status` being either `true`, `false`, or `unknown`. The `message` provides a human-readable explanation of why the condition is in its current state. For detailed information on the `reason` values, see link:https://github.com/operator-framework/operator-metering/blob/master/pkg/apis/metering/v1/util/report_util.go#L10[`pkg/apis/metering/v1/util/report_util.go`].
-* `lastReportTime`: Indicates the time up to which metering has collected data.
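-
-One way to inspect these fields on a live report is with a JSONPath query (the report name here is the example name used throughout this module, and the field paths follow the `status` layout described above):
-
-[source,terminal]
-----
-$ oc -n openshift-metering get report pod-cpu-request-hourly -o=jsonpath='{.status.conditions[?(@.type=="Running")].message}'
-----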
diff --git a/modules/metering-store-data-in-azure.adoc b/modules/metering-store-data-in-azure.adoc deleted file mode 100644 index a193836d22ba..000000000000 --- a/modules/metering-store-data-in-azure.adoc +++ /dev/null @@ -1,57 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-azure_{context}"] -= Storing data in Microsoft Azure - -To store data in Azure blob storage, you must use an existing container. - -.Procedure - -. Edit the `spec.storage` section in the `azure-blob-storage.yaml` file: -+ -.Example `azure-blob-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "azure" - azure: - container: "bucket1" <1> - secretName: "my-azure-secret" <2> - rootDirectory: "/testDir" <3> ----- -<1> Specify the container name. -<2> Specify a secret in the metering namespace. See the example `Secret` object below for more details. -<3> Optional: Specify the directory where you would like to store your data. - -. Use the following `Secret` object as a template: -+ -.Example Azure `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-azure-secret -data: - azure-storage-account-name: "dGVzdAo=" - azure-secret-access-key: "c2VjcmV0Cg==" ----- - -. Create the secret: -+ -[source,terminal] ----- -$ oc create secret -n openshift-metering generic my-azure-secret \ - --from-literal=azure-storage-account-name=my-storage-account-name \ - --from-literal=azure-secret-access-key=my-secret-key ----- diff --git a/modules/metering-store-data-in-gcp.adoc b/modules/metering-store-data-in-gcp.adoc deleted file mode 100644 index 8a39f891ab18..000000000000 --- a/modules/metering-store-data-in-gcp.adoc +++ /dev/null @@ -1,53 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-gcp_{context}"] -= Storing data in Google Cloud Storage - -To store your data in Google Cloud Storage, you must use an existing bucket. - -.Procedure - -. Edit the `spec.storage` section in the `gcs-storage.yaml` file: -+ -.Example `gcs-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "gcs" - gcs: - bucket: "metering-gcs/test1" <1> - secretName: "my-gcs-secret" <2> ----- -<1> Specify the name of the bucket. You can optionally specify the directory within the bucket where you would like to store your data. -<2> Specify a secret in the metering namespace. See the example `Secret` object below for more details. - -. Use the following `Secret` object as a template: -+ -.Example Google Cloud Storage `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-gcs-secret -data: - gcs-service-account.json: "c2VjcmV0Cg==" ----- - -. 
Create the secret: -+ -[source,terminal] ----- -$ oc create secret -n openshift-metering generic my-gcs-secret \ - --from-file gcs-service-account.json=/path/to/my/service-account-key.json ----- diff --git a/modules/metering-store-data-in-s3-compatible.adoc b/modules/metering-store-data-in-s3-compatible.adoc deleted file mode 100644 index 1484c0281d36..000000000000 --- a/modules/metering-store-data-in-s3-compatible.adoc +++ /dev/null @@ -1,48 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-s3-compatible_{context}"] -= Storing data in S3-compatible storage - -You can use S3-compatible storage such as Noobaa. - -.Procedure - -. Edit the `spec.storage` section in the `s3-compatible-storage.yaml` file: -+ -.Example `s3-compatible-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "s3Compatible" - s3Compatible: - bucket: "bucketname" <1> - endpoint: "http://example:port-number" <2> - secretName: "my-aws-secret" <3> ----- -<1> Specify the name of your S3-compatible bucket. -<2> Specify the endpoint for your storage. -<3> The name of a secret in the metering namespace containing the AWS credentials in the `data.aws-access-key-id` and `data.aws-secret-access-key` fields. See the example `Secret` object below for more details. - -. Use the following `Secret` object as a template: -+ -.Example S3-compatible `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-aws-secret -data: - aws-access-key-id: "dGVzdAo=" - aws-secret-access-key: "c2VjcmV0Cg==" ----- diff --git a/modules/metering-store-data-in-s3.adoc b/modules/metering-store-data-in-s3.adoc deleted file mode 100644 index 41199e170c37..000000000000 --- a/modules/metering-store-data-in-s3.adoc +++ /dev/null @@ -1,136 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-s3_{context}"] -= Storing data in Amazon S3 - -Metering can use an existing Amazon S3 bucket or create a bucket for storage. - -[NOTE] -==== -Metering does not manage or delete any S3 bucket data. You must manually clean up S3 buckets that are used to store metering data. -==== - -.Procedure - -. Edit the `spec.storage` section in the `s3-storage.yaml` file: -+ -.Example `s3-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "s3" - s3: - bucket: "bucketname/path/" <1> - region: "us-west-1" <2> - secretName: "my-aws-secret" <3> - # Set to false if you want to provide an existing bucket, instead of - # having metering create the bucket on your behalf. - createBucket: true <4> ----- -<1> Specify the name of the bucket where you would like to store your data. Optional: Specify the path within the bucket. -<2> Specify the region of your bucket. -<3> The name of a secret in the metering namespace containing the AWS credentials in the `data.aws-access-key-id` and `data.aws-secret-access-key` fields. See the example `Secret` object below for more details. 
-<4> Set this field to `false` if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have `CreateBucket` permissions. - -. Use the following `Secret` object as a template: -+ -.Example AWS `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-aws-secret -data: - aws-access-key-id: "dGVzdAo=" - aws-secret-access-key: "c2VjcmV0Cg==" ----- -+ -[NOTE] -==== -The values of the `aws-access-key-id` and `aws-secret-access-key` must be base64 encoded. -==== - -. Create the secret: -+ -[source,terminal] ----- -$ oc create secret -n openshift-metering generic my-aws-secret \ - --from-literal=aws-access-key-id=my-access-key \ - --from-literal=aws-secret-access-key=my-secret-key ----- -+ -[NOTE] -==== -This command automatically base64 encodes your `aws-access-key-id` and `aws-secret-access-key` values. -==== - -The `aws-access-key-id` and `aws-secret-access-key` credentials must have read and write access to the bucket. The following `aws/read-write.json` file shows an IAM policy that grants the required permissions: - -.Example `aws/read-write.json` file -[source,json] ----- -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "1", - "Effect": "Allow", - "Action": [ - "s3:AbortMultipartUpload", - "s3:DeleteObject", - "s3:GetObject", - "s3:HeadBucket", - "s3:ListBucket", - "s3:ListMultipartUploadParts", - "s3:PutObject" - ], - "Resource": [ - "arn:aws:s3:::operator-metering-data/*", - "arn:aws:s3:::operator-metering-data" - ] - } - ] -} ----- - -If `spec.storage.hive.s3.createBucket` is set to `true` or unset in your `s3-storage.yaml` file, then you should use the `aws/read-write-create.json` file that contains permissions for creating and deleting buckets: - -.Example `aws/read-write-create.json` file -[source,json] ----- -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "1", - "Effect": "Allow", - "Action": [ - "s3:AbortMultipartUpload", - "s3:DeleteObject", - "s3:GetObject", - "s3:HeadBucket", - "s3:ListBucket", - "s3:CreateBucket", - "s3:DeleteBucket", - "s3:ListMultipartUploadParts", - "s3:PutObject" - ], - "Resource": [ - "arn:aws:s3:::operator-metering-data/*", - "arn:aws:s3:::operator-metering-data" - ] - } - ] -} ----- diff --git a/modules/metering-store-data-in-shared-volumes.adoc b/modules/metering-store-data-in-shared-volumes.adoc deleted file mode 100644 index a3a73285a5fe..000000000000 --- a/modules/metering-store-data-in-shared-volumes.adoc +++ /dev/null @@ -1,150 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-shared-volumes_{context}"] -= Storing data in shared volumes - -Metering does not configure storage by default. However, you can use any ReadWriteMany persistent volume (PV) or any storage class that provisions a ReadWriteMany PV for metering storage. - -[NOTE] -==== -NFS is not recommended to use in production. Using an NFS server on RHEL as a storage back end can fail to meet metering requirements and to provide the performance that is needed for the Metering Operator to work appropriately. - -Other NFS implementations on the marketplace might not have these issues, such as a Parallel Network File System (pNFS). pNFS is an NFS implementation with distributed and parallel capability. 
Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against {product-title} core components. -==== - -.Procedure - -. Modify the `shared-storage.yaml` file to use a ReadWriteMany persistent volume for storage: -+ -.Example `shared-storage.yaml` file --- -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "sharedPVC" - sharedPVC: - claimName: "metering-nfs" <1> - # Uncomment the lines below to provision a new PVC using the specified storageClass. <2> - # createPVC: true - # storageClass: "my-nfs-storage-class" - # size: 5Gi ----- - -Select one of the configuration options below: - -<1> Set `storage.hive.sharedPVC.claimName` to the name of an existing ReadWriteMany persistent volume claim (PVC). This configuration is necessary if you do not have dynamic volume provisioning or want to have more control over how the persistent volume is created. - -<2> Set `storage.hive.sharedPVC.createPVC` to `true` and set the `storage.hive.sharedPVC.storageClass` to the name of a storage class with ReadWriteMany access mode. This configuration uses dynamic volume provisioning to create a volume automatically. --- - -. Create the following resource objects that are required to deploy an NFS server for metering. Use the `oc create -f .yaml` command to create the object YAML files. - -.. Configure a `PersistentVolume` resource object: -+ -.Example `nfs_persistentvolume.yaml` file -[source,yaml] ----- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: nfs - labels: - role: nfs-server -spec: - capacity: - storage: 5Gi - accessModes: - - ReadWriteMany - storageClassName: nfs-server <1> - nfs: - path: "/" - server: REPLACEME - persistentVolumeReclaimPolicy: Delete ----- -<1> Must exactly match the `[kind: StorageClass].metadata.name` field value. - -.. Configure a `Pod` resource object with the `nfs-server` role: -+ -.Example `nfs_server.yaml` file -[source,yaml] ----- -apiVersion: v1 -kind: Pod -metadata: - name: nfs-server - labels: - role: nfs-server -spec: - containers: - - name: nfs-server - image: <1> - imagePullPolicy: IfNotPresent - ports: - - name: nfs - containerPort: 2049 - securityContext: - privileged: true - volumeMounts: - - mountPath: "/mnt/data" - name: local - volumes: - - name: local - emptyDir: {} ----- -<1> Install your NFS server image. - -.. Configure a `Service` resource object with the `nfs-server` role: -+ -.Example `nfs_service.yaml` file -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: nfs-service - labels: - role: nfs-server -spec: - ports: - - name: 2049-tcp - port: 2049 - protocol: TCP - targetPort: 2049 - selector: - role: nfs-server - sessionAffinity: None - type: ClusterIP ----- - -.. Configure a `StorageClass` resource object: -+ -.Example `nfs_storageclass.yaml` file -[source,yaml] ----- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: nfs-server <1> -provisioner: example.com/nfs -parameters: - archiveOnDelete: "false" -reclaimPolicy: Delete -volumeBindingMode: Immediate ----- -<1> Must exactly match the `[kind: PersistentVolume].spec.storageClassName` field value. - - -[WARNING] -==== -Configuration of your NFS storage, and any relevant resource objects, will vary depending on the NFS server image that you use for metering storage. 
-==== diff --git a/modules/metering-troubleshooting.adoc b/modules/metering-troubleshooting.adoc deleted file mode 100644 index e0a857ced20f..000000000000 --- a/modules/metering-troubleshooting.adoc +++ /dev/null @@ -1,195 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-troubleshooting-debugging.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-troubleshooting_{context}"] -= Troubleshooting metering - -A common issue with metering is pods failing to start. Pods might fail to start due to lack of resources or if they have a dependency on a resource that does not exist, such as a `StorageClass` or `Secret` resource. - -[id="metering-not-enough-compute-resources_{context}"] -== Not enough compute resources - -A common issue when installing or running metering is a lack of compute resources. As the cluster grows and more reports are created, the Reporting Operator pod requires more memory. If memory usage reaches the pod limit, the cluster considers the pod out of memory (OOM) and terminates it with an `OOMKilled` status. Ensure that metering is allocated the minimum resource requirements described in the installation prerequisites. - -[NOTE] -==== -The Metering Operator does not autoscale the Reporting Operator based on the load in the cluster. Therefore, CPU usage for the Reporting Operator pod does not increase as the cluster grows. -==== - -To determine if the issue is with resources or scheduling, follow the troubleshooting instructions included in the Kubernetes document https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting[Managing Compute Resources for Containers]. - -To troubleshoot issues due to a lack of compute resources, check the following within the `openshift-metering` namespace. - -.Prerequisites - -* You are currently in the `openshift-metering` namespace. Change to the `openshift-metering` namespace by running: -+ -[source,terminal] ----- -$ oc project openshift-metering ----- - -.Procedure - -. Check for metering `Report` resources that fail to complete and show the status of `ReportingPeriodUnmetDependencies`: -+ -[source,terminal] ----- -$ oc get reports ----- -+ -.Example output -[source,terminal] ----- -NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE -namespace-cpu-utilization-adhoc-10 namespace-cpu-utilization Finished 2020-10-31T00:00:00Z 2m38s -namespace-cpu-utilization-adhoc-11 namespace-cpu-utilization ReportingPeriodUnmetDependencies 2m23s -namespace-memory-utilization-202010 namespace-memory-utilization ReportingPeriodUnmetDependencies 26s -namespace-memory-utilization-202011 namespace-memory-utilization ReportingPeriodUnmetDependencies 14s ----- - -. Check the `ReportDataSource` resources where the `NEWEST METRIC` is less than the report end date: -+ -[source,terminal] ----- -$ oc get reportdatasource ----- -+ -.Example output -[source,terminal] ----- -NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE -... -node-allocatable-cpu-cores 2020-04-23T09:14:00Z 2020-08-31T10:07:00Z 2020-04-23T09:14:00Z 2020-10-15T17:13:00Z 2020-12-09T12:45:10Z 230d -node-allocatable-memory-bytes 2020-04-23T09:14:00Z 2020-08-30T05:19:00Z 2020-04-23T09:14:00Z 2020-10-14T08:01:00Z 2020-12-09T12:45:12Z 230d -... -pod-usage-memory-bytes 2020-04-23T09:14:00Z 2020-08-24T20:25:00Z 2020-04-23T09:14:00Z 2020-10-09T23:31:00Z 2020-12-09T12:45:12Z 230d ----- - -. 
Check the health of the `reporting-operator` `Pod` resource for a high number of pod restarts: -+ -[source,terminal] ----- -$ oc get pods -l app=reporting-operator ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -reporting-operator-84f7c9b7b6-fr697 2/2 Running 542 8d <1> ----- -<1> The Reporting Operator pod is restarting at a high rate. - -. Check the `reporting-operator` `Pod` resource for an `OOMKilled` termination: -+ -[source,terminal] ----- -$ oc describe pod/reporting-operator-84f7c9b7b6-fr697 ----- -+ -.Example output -[source,terminal] ----- -Name: reporting-operator-84f7c9b7b6-fr697 -Namespace: openshift-metering -Priority: 0 -Node: ip-10-xx-xx-xx.ap-southeast-1.compute.internal/10.xx.xx.xx -... - Ports: 8080/TCP, 6060/TCP, 8082/TCP - Host Ports: 0/TCP, 0/TCP, 0/TCP - State: Running - Started: Thu, 03 Dec 2020 20:59:45 +1000 - Last State: Terminated - Reason: OOMKilled <1> - Exit Code: 137 - Started: Thu, 03 Dec 2020 20:38:05 +1000 - Finished: Thu, 03 Dec 2020 20:59:43 +1000 ----- -<1> The Reporting Operator pod was terminated due to OOM kill. - - -[id="metering-check-and-increase-memory-limits_{context}"] -=== Increasing the reporting-operator pod memory limit - -If you are experiencing an increase in pod restarts and OOM kill events, you can check the current memory limit set for the Reporting Operator pod. Increasing the memory limit allows the Reporting Operator pod to update the report data sources. If necessary, increase the memory limit in your `MeteringConfig` resource by 25% - 50%. - -.Procedure - -. Check the current memory limits of the `reporting-operator` `Pod` resource: -+ -[source,terminal] ----- -$ oc describe pod reporting-operator-67d6f57c56-79mrt ----- -+ -.Example output -[source,terminal] ----- -Name: reporting-operator-67d6f57c56-79mrt -Namespace: openshift-metering -Priority: 0 -... - Ports: 8080/TCP, 6060/TCP, 8082/TCP - Host Ports: 0/TCP, 0/TCP, 0/TCP - State: Running - Started: Tue, 08 Dec 2020 14:26:21 +1000 - Ready: True - Restart Count: 0 - Limits: - cpu: 1 - memory: 500Mi <1> - Requests: - cpu: 500m - memory: 250Mi - Environment: -... ----- -<1> The current memory limit for the Reporting Operator pod. - -. Edit the `MeteringConfig` resource to update the memory limit: -+ -[source,terminal] ----- -$ oc edit meteringconfig/operator-metering ----- -+ -.Example `MeteringConfig` resource -[source,yaml] ----- -kind: MeteringConfig -metadata: - name: operator-metering - namespace: openshift-metering -spec: - reporting-operator: - spec: - resources: <1> - limits: - cpu: 1 - memory: 750Mi - requests: - cpu: 500m - memory: 500Mi -... ----- -<1> Add or increase memory limits within the `resources` field of the `MeteringConfig` resource. -+ -[NOTE] -==== -If there continue to be numerous OOM killed events after memory limits are increased, this might indicate that a different issue is causing the reports to be in a pending state. -==== - -[id="metering-storageclass-not-configured_{context}"] -== StorageClass resource not configured - -Metering requires that a default `StorageClass` resource be configured for dynamic provisioning. - -See the documentation on configuring metering for information on how to check if there are any `StorageClass` resources configured for the cluster, how to set the default, and how to configure metering to use a storage class other than the default. 
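-
-As a first check, you can list the storage classes in the cluster with the standard CLI. This is a minimal sketch and assumes only that you are logged in with sufficient permissions:
-
-[source,terminal]
-----
-$ oc get storageclass
-----
-
-If a default storage class is configured, it is marked with `(default)` after its name in the output.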
- -[id="metering-secret-not-configured-correctly_{context}"] -== Secret not configured correctly - -A common issue with metering is providing the incorrect secret when configuring your persistent storage. Be sure to review the example configuration files and create you secret according to the guidelines for your storage provider. diff --git a/modules/metering-uninstall-crds.adoc b/modules/metering-uninstall-crds.adoc deleted file mode 100644 index 66bd61ec3ecc..000000000000 --- a/modules/metering-uninstall-crds.adoc +++ /dev/null @@ -1,28 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-uninstall.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-uninstall-crds_{context}"] -= Uninstalling metering custom resource definitions - -The metering custom resource definitions (CRDs) remain in the cluster after the Metering Operator is uninstalled and the `openshift-metering` namespace is deleted. - -[IMPORTANT] -==== -Deleting the metering CRDs disrupts any additional metering installations in other namespaces in your cluster. Ensure that there are no other metering installations before proceeding. -==== - -.Prerequisites - -* The `MeteringConfig` custom resource in the `openshift-metering` namespace is deleted. -* The `openshift-metering` namespace is deleted. - -.Procedure - -* Delete the remaining metering CRDs: -+ -[source,terminal] ----- -$ oc get crd -o name | grep "metering.openshift.io" | xargs oc delete ----- diff --git a/modules/metering-uninstall.adoc b/modules/metering-uninstall.adoc deleted file mode 100644 index 4cfedd8bd188..000000000000 --- a/modules/metering-uninstall.adoc +++ /dev/null @@ -1,36 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-uninstall.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-uninstall_{context}"] -= Uninstalling a metering namespace - -Uninstall your metering namespace, for example the `openshift-metering` namespace, by removing the `MeteringConfig` resource and deleting the `openshift-metering` namespace. - -.Prerequisites - -* The Metering Operator is removed from your cluster. - -.Procedure - -. Remove all resources created by the Metering Operator: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering delete meteringconfig --all ----- - -. After the previous step is complete, verify that all pods in the `openshift-metering` namespace are deleted or are reporting a terminating state: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering get pods ----- - -. Delete the `openshift-metering` namespace: -+ -[source,terminal] ----- -$ oc delete namespace openshift-metering ----- diff --git a/modules/metering-use-mysql-or-postgresql-for-hive.adoc b/modules/metering-use-mysql-or-postgresql-for-hive.adoc deleted file mode 100644 index 38ebb49072ec..000000000000 --- a/modules/metering-use-mysql-or-postgresql-for-hive.adoc +++ /dev/null @@ -1,89 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-hive-metastore.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-use-mysql-or-postgresql-for-hive_{context}"] -= Using MySQL or PostgreSQL for the Hive metastore - -The default installation of metering configures Hive to use an embedded Java database called Derby. This is unsuited for larger environments and can be replaced with either a MySQL or PostgreSQL database. Use the following example configuration files if your deployment requires a MySQL or PostgreSQL database for Hive. 
- -There are three configuration options you can use to control the database that is used by Hive metastore: `url`, `driver`, and `secretName`. - -Create your MySQL or Postgres instance with a user name and password. Then create a secret by using the OpenShift CLI (`oc`) or a YAML file. The `secretName` you create for this secret must map to the `spec.hive.spec.config.db.secretName` field in the `MeteringConfig` object resource. - -.Procedure - -. Create a secret using the OpenShift CLI (`oc`) or by using a YAML file: -+ -* Create a secret by using the following command: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering create secret generic --from-literal=username= --from-literal=password= ----- -+ -* Create a secret by using a YAML file. For example: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: <1> -data: - username: <2> - password: <3> ----- -<1> The name of the secret. -<2> Base64 encoded database user name. -<3> Base64 encoded database password. - -. Create a configuration file to use a MySQL or PostgreSQL database for Hive: -+ -* To use a MySQL database for Hive, use the example configuration file below. Metering supports configuring the internal Hive metastore to use the MySQL server versions 5.6, 5.7, and 8.0. -+ --- -[source,yaml] ----- -spec: - hive: - spec: - metastore: - storage: - create: false - config: - db: - url: "jdbc:mysql://mysql.example.com:3306/hive_metastore" <1> - driver: "com.mysql.cj.jdbc.Driver" - secretName: "REPLACEME" <2> ----- -[NOTE] -==== -When configuring Metering to work with older MySQL server versions, such as 5.6 or 5.7, you might need to add the link:https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-usagenotes-known-issues-limitations.html[`enabledTLSProtocols` JDBC URL parameter] when configuring the internal Hive metastore. -==== -<1> To use the TLS v1.2 cipher suite, set `url` to `"jdbc:mysql://:/?enabledTLSProtocols=TLSv1.2"`. -<2> The name of the secret containing the base64-encrypted user name and password database credentials. --- -+ -You can pass additional JDBC parameters using the `spec.hive.config.url`. For more details, see the link:https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-configuration-properties.html[MySQL Connector/J 8.0 documentation]. -+ -* To use a PostgreSQL database for Hive, use the example configuration file below: -+ -[source,yaml] ----- -spec: - hive: - spec: - metastore: - storage: - create: false - config: - db: - url: "jdbc:postgresql://postgresql.example.com:5432/hive_metastore" - driver: "org.postgresql.Driver" - username: "" - password: "" ----- -+ -You can pass additional JDBC parameters using the `spec.hive.config.url`. For more details, see the link:https://jdbc.postgresql.org/documentation/head/connect.html#connection-parameters[PostgreSQL JDBC driver documentation]. diff --git a/modules/metering-viewing-report-results.adoc b/modules/metering-viewing-report-results.adoc deleted file mode 100644 index 7f6910c515e9..000000000000 --- a/modules/metering-viewing-report-results.adoc +++ /dev/null @@ -1,103 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-using-metering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-viewing-report-results_{context}"] -= Viewing report results - -Viewing a report's results involves querying the reporting API route and authenticating to the API using your {product-title} credentials. -Reports can be retrieved as `JSON`, `CSV`, or `Tabular` formats. 
- -.Prerequisites - -* Metering is installed. -* To access report results, you must either be a cluster administrator, or you need to be granted access using the `report-exporter` role in the `openshift-metering` namespace. - -.Procedure - -. Change to the `openshift-metering` project: -+ -[source,terminal] ----- -$ oc project openshift-metering ----- - -. Query the reporting API for results: - -.. Create a variable for the metering `reporting-api` route then get the route: -+ -[source,terminal] ----- -$ meteringRoute="$(oc get routes metering -o jsonpath='{.spec.host}')" ----- -+ -[source,terminal] ----- -$ echo "$meteringRoute" ----- - -.. Get the token of your current user to be used in the request: -+ -[source,terminal] ----- -$ token="$(oc whoami -t)" ----- - -.. Set `reportName` to the name of the report you created: -+ -[source,terminal] ----- -$ reportName=namespace-cpu-request-2020 ----- - -.. Set `reportFormat` to one of `csv`, `json`, or `tabular` to specify the output format of the API response: -+ -[source,terminal] ----- -$ reportFormat=csv ----- - -.. To get the results, use `curl` to make a request to the reporting API for your report: -+ -[source,terminal] ----- -$ curl --insecure -H "Authorization: Bearer ${token}" "https://${meteringRoute}/api/v1/reports/get?name=${reportName}&namespace=openshift-metering&format=$reportFormat" ----- -+ -.Example output with `reportName=namespace-cpu-request-2020` and `reportFormat=csv` -[source,terminal] ----- -period_start,period_end,namespace,pod_request_cpu_core_seconds -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-apiserver,11745.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-apiserver-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-authentication,522.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-authentication-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cloud-credential-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-machine-approver,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-node-tuning-operator,3385.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-samples-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-version,522.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-console,522.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-console-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-controller-manager,7830.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-controller-manager-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-dns,34372.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-dns-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-etcd,23490.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-image-registry,5993.400000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-ingress,5220.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-ingress-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver,12528.000000 -2020-01-01 
00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager,8613.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-machine-api,1305.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-machine-config-operator,9637.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-metering,19575.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-monitoring,6256.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-network-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-ovn-kubernetes,94503.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-service-ca,783.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-service-ca-operator,261.000000 ----- diff --git a/modules/metering-writing-reports.adoc b/modules/metering-writing-reports.adoc deleted file mode 100644 index 4f4538f1046d..000000000000 --- a/modules/metering-writing-reports.adoc +++ /dev/null @@ -1,73 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-using-metering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-writing-reports_{context}"] -= Writing Reports - -Writing a report is the way to process and analyze data using metering. - -To write a report, you must define a `Report` resource in a YAML file, specify the required parameters, and create it in the `openshift-metering` namespace. - -.Prerequisites - -* Metering is installed. - -.Procedure - -. Change to the `openshift-metering` project: -+ -[source,terminal] ----- -$ oc project openshift-metering ----- - -. Create a `Report` resource as a YAML file: -+ -.. Create a YAML file with the following content: -+ -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: namespace-cpu-request-2020 <2> - namespace: openshift-metering -spec: - reportingStart: '2020-01-01T00:00:00Z' - reportingEnd: '2020-12-30T23:59:59Z' - query: namespace-cpu-request <1> - runImmediately: true <3> ----- -<1> The `query` specifies the `ReportQuery` resources used to generate the report. Change this based on what you want to report on. For a list of options, run `oc get reportqueries | grep -v raw`. -<2> Use a descriptive name about what the report does for `metadata.name`. A good name describes the query, and the schedule or period you used. -<3> Set `runImmediately` to `true` for it to run with whatever data is available, or set it to `false` if you want it to wait for `reportingEnd` to pass. - -.. Run the following command to create the `Report` resource: -+ -[source,terminal] ----- -$ oc create -f .yaml ----- -+ -.Example output -[source,terminal] ----- -report.metering.openshift.io/namespace-cpu-request-2020 created ----- -+ - -. 
You can list reports and their `Running` status with the following command: -+ -[source,terminal] ----- -$ oc get reports ----- -+ -.Example output -[source,terminal] ----- -NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE -namespace-cpu-request-2020 namespace-cpu-request Finished 2020-12-30T23:59:59Z 26s ----- diff --git a/modules/minimum-ibm-z-system-requirements.adoc b/modules/minimum-ibm-z-system-requirements.adoc deleted file mode 100644 index a855a5454c84..000000000000 --- a/modules/minimum-ibm-z-system-requirements.adoc +++ /dev/null @@ -1,120 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_ibm_z/installing-ibm-z.adoc -// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc -// * installing/installing_ibm_z/installing-ibm-z-lpar.adoc -// * installing/installing_ibm_z/installing-restricted-networks-ibm-z-lpar.adoc - -ifeval::["{context}" == "installing-ibm-z"] -:ibm-z: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z"] -:ibm-z: -endif::[] -ifeval::["{context}" == "installing-ibm-z-lpar"] -:ibm-z-lpar: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z-lpar"] -:ibm-z-lpar: -endif::[] - -:_mod-docs-content-type: CONCEPT -[id="minimum-ibm-z-system-requirements_{context}"] -= Minimum {ibm-z-title} system environment - -You can install {product-title} version {product-version} on the following {ibm-name} hardware: - -* {ibm-name} z16 (all models), {ibm-name} z15 (all models), {ibm-name} z14 (all models) -* {ibm-linuxone-name} 4 (all models), {ibm-linuxone-name} III (all models), {ibm-linuxone-name} Emperor II, {ibm-linuxone-name} Rockhopper II - -ifdef::ibm-z-lpar[] -[IMPORTANT] -==== -When running {product-title} on {ibm-z-name} without a hypervisor use the Dynamic Partition Manager (DPM) to manage your machine. -// Once blog url is available add: For details see blog... -==== -endif::ibm-z-lpar[] - - -== Hardware requirements - -* The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. -* At least one network connection to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster. - -[NOTE] -==== -You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of {ibm-z-name}. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster. -==== - -[IMPORTANT] -==== -Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the {product-title} clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. 
-==== - - -== Operating system requirements - -ifdef::ibm-z[] -* One instance of z/VM 7.2 or later - -On your z/VM instance, set up: - -* Three guest virtual machines for {product-title} control plane machines -* Two guest virtual machines for {product-title} compute machines -* One guest virtual machine for the temporary {product-title} bootstrap machine -endif::ibm-z[] -ifdef::ibm-z-lpar[] -* Five logical partitions (LPARs) -** Three LPARs for {product-title} control plane machines -** Two LPARs for {product-title} compute machines -* One machine for the temporary {product-title} bootstrap machine -endif::ibm-z-lpar[] - - -== {ibm-z-title} network connectivity requirements - -ifdef::ibm-z[] -To install on {ibm-z-name} under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: - -* A direct-attached OSA or RoCE network adapter -* A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. -endif::ibm-z[] -ifdef::ibm-z-lpar[] -To install on {ibm-z-name} in an LPAR, you need: - -* A direct-attached OSA or RoCE network adapter -* For a preferred setup, use OSA link aggregation. -endif::ibm-z-lpar[] - - -=== Disk storage - -ifdef::ibm-z[] -* FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. -* FCP attached disk storage -endif::ibm-z[] -ifdef::ibm-z-lpar[] -* FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. -* FCP attached disk storage -* NVMe disk storage -endif::ibm-z-lpar[] - - -=== Storage / Main Memory - -* 16 GB for {product-title} control plane machines -* 8 GB for {product-title} compute machines -* 16 GB for the temporary {product-title} bootstrap machine - -ifeval::["{context}" == "installing-ibm-z"] -:!ibm-z: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z"] -:!ibm-z: -endif::[] -ifeval::["{context}" == "installing-ibm-z-lpar"] -:!ibm-z-lpar: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z-lpar"] -:!ibm-z-lpar: -endif::[] \ No newline at end of file diff --git a/modules/mod-docs-ocp-conventions.adoc b/modules/mod-docs-ocp-conventions.adoc deleted file mode 100644 index df0ff06c42b0..000000000000 --- a/modules/mod-docs-ocp-conventions.adoc +++ /dev/null @@ -1,154 +0,0 @@ -// Module included in the following assemblies: -// -// * mod_docs_guide/mod-docs-conventions-ocp.adoc - -// Base the file name and the ID on the module title. For example: -// * file name: my-reference-a.adoc -// * ID: [id="my-reference-a"] -// * Title: = My reference A - -[id="mod-docs-ocp-conventions_{context}"] -= Modular docs OpenShift conventions - -These Modular Docs conventions for OpenShift docs build on top of the CCS -modular docs guidelines. - -These guidelines and conventions should be read along with the: - -* General CCS -link:https://redhat-documentation.github.io/modular-docs/[modular docs guidelines]. 
-* link:https://redhat-documentation.github.io/asciidoc-markup-conventions/[AsciiDoc markup conventions] -* link:https://github.com/openshift/openshift-docs/blob/main/contributing_to_docs/contributing.adoc[OpenShift Contribution Guide] -* link:https://github.com/openshift/openshift-docs/blob/main/contributing_to_docs/doc_guidelines.adoc[OpenShift Documentation Guidelines] - -IMPORTANT: If some convention is duplicated, the convention in this guide -supersedes all others. - -[id="ocp-ccs-conventions_{context}"] -== OpenShift CCS conventions - -* All assemblies must define a context that is unique. -+ -Add this context at the top of the page, just before the first anchor id. -+ -Example: -+ ----- -:context: assembly-gsg ----- - -* All assemblies must include the `_attributes/common-attributes.adoc` file near the -context statement. This file contains the standard attributes for the collection. -+ -`include::_attributes/common-attributes.adoc[leveloffset=+1]` - -* All anchor ids must follow the format: -+ ----- -[id="_{context}"] ----- -+ -Anchor name is _connected_ to the `{context}` using a dash. -+ -Example: -+ ----- -[id="creating-your-first-content_{context}"] ----- - -* All modules anchor ids must have the `{context}` variable. -+ -This is just reiterating the format described in the previous bullet point. - -* A comment section must be present at the top of each module and assembly, as -shown in the link:https://github.com/redhat-documentation/modular-docs/tree/master/modular-docs-manual/files[modular docs templates]. -+ -The modules comment section must list which assemblies this module has been -included in, while the assemblies comment section must include other assemblies -that it itself is included in, if any. -+ -Example comment section in an assembly: -+ ----- -// This assembly is included in the following assemblies: -// -// NONE ----- -+ -Example comment section in a module: -+ ----- -// Module included in the following assemblies: -// -// mod_docs_guide/mod-docs-conventions-ocp.adoc ----- - -* All modules must go in the modules directory which is present in the top level -of the openshift-docs repository. These modules must follow the file naming -conventions specified in the -link:https://redhat-documentation.github.io/modular-docs/[modular docs guidelines]. - -* All assemblies must go in the relevant guide/book. If you cannot find a relevant - guide/book, reach out to a member of the OpenShift CCS team. So guides/books contain assemblies, which - contain modules. - -* modules and images folders are symlinked to the top level folder from each book/guide folder. - -* In your assemblies, when you are linking to the content in other books, you must -use the relative path starting like so: -+ ----- -xref:../architecture/architecture.adoc#architecture[architecture] overview. ----- -+ -[IMPORTANT] -==== -You must not include xrefs in modules or create an xref to a module. You can -only use xrefs to link from one assembly to another. -==== - -* All modules in assemblies must be included using the following format (replace 'ilude' with 'include'): -+ -`ilude::modules/.adoc[]` -+ -_OR_ -+ -`ilude::modules/.adoc[leveloffset=+]` -+ -if it requires a leveloffset. -+ -Example: -+ -`include::modules/creating-your-first-content.adoc[leveloffset=+1]` - -NOTE: There is no `..` at the starting of the path. - -//// -* If your assembly is in a subfolder of a guide/book directory, you must add a -statement to the assembly's metadata to use `relfileprefix`. 
-+
-This adjusts all the xref links in your modules to start from the root
-directory.
-+
-At the top of the assembly (in the metadata section), add the following line:
-+
-----
-:relfileprefix: ../
-----
-+
-NOTE: There is a space between the second : and the ../.
-
-+
-The only difference in including a module in the _install_config/index.adoc_
-assembly and _install_config/install/planning.adoc_ assembly is the addition of
-the `:relfileprefix: ../` attribute at the top of the
-_install_config/install/planning.adoc_ assembly. The actual inclusion of
-module remains the same as described in the previous bullet.
-
-+
-NOTE: This strategy is in place so that links resolve correctly on both
-docs.openshift.com and portal docs.
-////
-
-* Do not use 3rd level folders even though AsciiBinder permits it. If you need
-to, work out a better way to organize your content.
diff --git a/modules/multi-architecture-scheduling-overview.adoc b/modules/multi-architecture-scheduling-overview.adoc
deleted file mode 100644
index 4bc4c782604a..000000000000
--- a/modules/multi-architecture-scheduling-overview.adoc
+++ /dev/null
@@ -1,13 +0,0 @@
-// module included in the following assembly
-//
-//post_installation_configuration/configuring-multi-arch-compute-machines/multi-architecture-compute-managing.adoc
-
-:_mod-docs-content-type: CONCEPT
-[id="multi-architecture-scheduling-overview_{context}"]
-= Scheduling workloads on clusters with multi-architecture compute machines
-
-Before deploying a workload onto a cluster with compute nodes of different architectures, you must configure your compute node scheduling process so that the pods in your cluster are correctly assigned.
-
-You can schedule workloads onto multi-architecture nodes for your cluster in several ways. For example, you can use node affinity or a node selector to select the node that you want the pod to schedule onto. You can also use scheduling mechanisms, such as taints and tolerations, together with node affinity or node selectors to schedule workloads correctly.
-
-
diff --git a/modules/nbde-managing-encryption-keys.adoc b/modules/nbde-managing-encryption-keys.adoc
deleted file mode 100644
index 25d0849f1a78..000000000000
--- a/modules/nbde-managing-encryption-keys.adoc
+++ /dev/null
@@ -1,10 +0,0 @@
-// Module included in the following assemblies:
-//
-// security/nbde-implementation-guide.adoc
-
-[id="nbde-managing-encryption-keys_{context}"]
-= Tang server encryption key management
-
-The cryptographic mechanism to recreate the encryption key is based on the _blinded key_ stored on the node and the private key of the involved Tang servers. To protect against the possibility of an attacker who has obtained both the Tang server private key and the node’s encrypted disk, periodic rekeying is advisable.
-
-You must perform the rekeying operation for every node before you can delete the old key from the Tang server. The following sections provide procedures for rekeying and deleting old keys.
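-
-As a rough sketch of the node-side step, the Clevis binding on an encrypted device can be regenerated so that it is bound against the Tang server's current keys. The device path and LUKS slot shown here are illustrative assumptions only; use the procedures in the following sections for the supported steps:
-
-[source,terminal]
-----
-$ sudo clevis luks regen -d /dev/sda4 -s 1
-----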
diff --git a/modules/nw-egress-ips-automatic.adoc b/modules/nw-egress-ips-automatic.adoc deleted file mode 100644 index 38e17a6a92a0..000000000000 --- a/modules/nw-egress-ips-automatic.adoc +++ /dev/null @@ -1,89 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/openshift_sdn/assigning-egress-ips.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-egress-ips-automatic_{context}"] -= Configuring automatically assigned egress IP addresses for a namespace - -In {product-title} you can enable automatic assignment of an egress IP address -for a specific namespace across one or more nodes. - -.Prerequisites - -* You have access to the cluster as a user with the `cluster-admin` role. -* You have installed the OpenShift CLI (`oc`). - -.Procedure - -. Update the `NetNamespace` object with the egress IP address using the -following JSON: -+ -[source,terminal] ----- - $ oc patch netnamespace --type=merge -p \ - '{ - "egressIPs": [ - "" - ] - }' ----- -+ --- -where: - -``:: Specifies the name of the project. -``:: Specifies one or more egress IP addresses for the `egressIPs` array. --- -+ -For example, to assign `project1` to an IP address of 192.168.1.100 and -`project2` to an IP address of 192.168.1.101: -+ -[source,terminal] ----- -$ oc patch netnamespace project1 --type=merge -p \ - '{"egressIPs": ["192.168.1.100"]}' -$ oc patch netnamespace project2 --type=merge -p \ - '{"egressIPs": ["192.168.1.101"]}' ----- -+ -[NOTE] -==== -Because OpenShift SDN manages the `NetNamespace` object, you can make changes only by modifying the existing `NetNamespace` object. Do not create a new `NetNamespace` object. -==== - -. Indicate which nodes can host egress IP addresses by setting the `egressCIDRs` -parameter for each host using the following JSON: -+ -[source,terminal] ----- -$ oc patch hostsubnet --type=merge -p \ - '{ - "egressCIDRs": [ - "", "" - ] - }' ----- -+ --- -where: - -``:: Specifies a node name. -``:: Specifies an IP address range in CIDR format. You can specify more than one address range for the `egressCIDRs` array. --- -+ -For example, to set `node1` and `node2` to host egress IP addresses -in the range 192.168.1.0 to 192.168.1.255: -+ -[source,terminal] ----- -$ oc patch hostsubnet node1 --type=merge -p \ - '{"egressCIDRs": ["192.168.1.0/24"]}' -$ oc patch hostsubnet node2 --type=merge -p \ - '{"egressCIDRs": ["192.168.1.0/24"]}' ----- -+ -{product-title} automatically assigns specific egress IP addresses to -available nodes in a balanced way. In this case, it assigns the egress IP -address 192.168.1.100 to `node1` and the egress IP address 192.168.1.101 to -`node2` or vice versa. diff --git a/modules/nw-egress-ips-static.adoc b/modules/nw-egress-ips-static.adoc deleted file mode 100644 index 7b2dd2863fa5..000000000000 --- a/modules/nw-egress-ips-static.adoc +++ /dev/null @@ -1,86 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -:_mod-docs-content-type: PROCEDURE -[id="nw-egress-ips-static_{context}"] -= Configuring manually assigned egress IP addresses for a namespace - -In {product-title} you can associate one or more egress IP addresses with a namespace. - -.Prerequisites - -* You have access to the cluster as a user with the `cluster-admin` role. -* You have installed the OpenShift CLI (`oc`). - -.Procedure - -. 
Update the `NetNamespace` object by specifying the following JSON -object with the desired IP addresses: -+ -[source,terminal] ----- - $ oc patch netnamespace --type=merge -p \ - '{ - "egressIPs": [ - "" - ] - }' ----- -+ --- -where: - -``:: Specifies the name of the project. -``:: Specifies one or more egress IP addresses for the `egressIPs` array. --- -+ -For example, to assign the `project1` project to the IP addresses `192.168.1.100` and `192.168.1.101`: -+ -[source,terminal] ----- -$ oc patch netnamespace project1 --type=merge \ - -p '{"egressIPs": ["192.168.1.100","192.168.1.101"]}' ----- -+ -To provide high availability, set the `egressIPs` value to two or more IP addresses on different nodes. If multiple egress IP addresses are set, then pods use all egress IP addresses roughly equally. -+ -[NOTE] -==== -Because OpenShift SDN manages the `NetNamespace` object, you can make changes only by modifying the existing `NetNamespace` object. Do not create a new `NetNamespace` object. -==== - -. Manually assign the egress IP address to the node hosts. -+ -If your cluster is installed on public cloud infrastructure, you must confirm that the node has available IP address capacity. -+ -Set the `egressIPs` parameter on the `HostSubnet` object on the node host. Using the following JSON, include as many IP addresses as you want to assign to that node host: -+ -[source,terminal] ----- -$ oc patch hostsubnet --type=merge -p \ - '{ - "egressIPs": [ - "", - "" - ] - }' ----- -+ --- -where: - -``:: Specifies a node name. -``:: Specifies an IP address. You can specify more than one IP address for the `egressIPs` array. --- -+ -For example, to specify that `node1` should have the egress IPs `192.168.1.100`, -`192.168.1.101`, and `192.168.1.102`: -+ -[source,terminal] ----- -$ oc patch hostsubnet node1 --type=merge -p \ - '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}' ----- -+ -In the previous example, all egress traffic for `project1` will be routed to the node hosting the specified egress IP, and then connected through Network Address Translation (NAT) to that IP address. diff --git a/modules/nw-egress-router-configmap.adoc b/modules/nw-egress-router-configmap.adoc deleted file mode 100644 index 34fe4629a990..000000000000 --- a/modules/nw-egress-router-configmap.adoc +++ /dev/null @@ -1,92 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -:_mod-docs-content-type: PROCEDURE -[id="configuring-egress-router-configmap_{context}"] -= Configuring an egress router destination mappings with a config map - -For a large or frequently-changing set of destination mappings, you can use a config map to externally maintain the list. -An advantage of this approach is that permission to edit the config map can be delegated to users without `cluster-admin` privileges. Because the egress router pod requires a privileged container, it is not possible for users without `cluster-admin` privileges to edit the pod definition directly. - -[NOTE] -==== -The egress router pod does not automatically update when the config map changes. -You must restart the egress router pod to get updates. -==== - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* Log in as a user with `cluster-admin` privileges. - -.Procedure - -. 
Create a file containing the mapping data for the egress router pod, as in the following example: -+ ----- -# Egress routes for Project "Test", version 3 - -80 tcp 203.0.113.25 - -8080 tcp 203.0.113.26 80 -8443 tcp 203.0.113.26 443 - -# Fallback -203.0.113.27 ----- -+ -You can put blank lines and comments into this file. - -. Create a `ConfigMap` object from the file: -+ -[source,terminal] ----- -$ oc delete configmap egress-routes --ignore-not-found ----- -+ -[source,terminal] ----- -$ oc create configmap egress-routes \ - --from-file=destination=my-egress-destination.txt ----- -+ -In the previous command, the `egress-routes` value is the name of the `ConfigMap` object to create and `my-egress-destination.txt` is the name of the file that the data is read from. -+ -[TIP] -==== -You can alternatively apply the following YAML to create the config map: - -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: egress-routes -data: - destination: | - # Egress routes for Project "Test", version 3 - - 80 tcp 203.0.113.25 - - 8080 tcp 203.0.113.26 80 - 8443 tcp 203.0.113.26 443 - - # Fallback - 203.0.113.27 ----- -==== - -. Create an egress router pod definition and specify the `configMapKeyRef` stanza for the `EGRESS_DESTINATION` field in the environment stanza: -+ -[source,yaml] ----- -... -env: -- name: EGRESS_DESTINATION - valueFrom: - configMapKeyRef: - name: egress-routes - key: destination -... ----- diff --git a/modules/nw-egress-router-dest-var.adoc b/modules/nw-egress-router-dest-var.adoc deleted file mode 100644 index 48bf1bc9b5cf..000000000000 --- a/modules/nw-egress-router-dest-var.adoc +++ /dev/null @@ -1,107 +0,0 @@ -// Module included in the following assemblies: -// -// * // * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -// Every redirection mode supports an expanded environment variable - -// Conditional per flavor of Pod -ifeval::["{context}" == "deploying-egress-router-layer3-redirection"] -:redirect: -endif::[] -ifeval::["{context}" == "deploying-egress-router-http-redirection"] -:http: -endif::[] -ifeval::["{context}" == "deploying-egress-router-dns-redirection"] -:dns: -endif::[] - -[id="nw-egress-router-dest-var_{context}"] -= Egress destination configuration format - -ifdef::redirect[] -When an egress router pod is deployed in redirect mode, you can specify redirection rules by using one or more of the following formats: - -- ` ` - Incoming connections to the given `` should be redirected to the same port on the given ``. `` is either `tcp` or `udp`. -- ` ` - As above, except that the connection is redirected to a different `` on ``. -- `` - If the last line is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address. If there is no fallback IP address then connections on other ports are rejected. - -In the example that follows several rules are defined: - -- The first line redirects traffic from local port `80` to port `80` on `203.0.113.25`. -- The second and third lines redirect local ports `8080` and `8443` to remote ports `80` and `443` on `203.0.113.26`. -- The last line matches traffic for any ports not specified in the previous rules. 
- -.Example configuration -[source,text] ----- -80 tcp 203.0.113.25 -8080 tcp 203.0.113.26 80 -8443 tcp 203.0.113.26 443 -203.0.113.27 ----- -endif::redirect[] - -ifdef::http[] -When an egress router pod is deployed in HTTP proxy mode, you can specify redirection rules by using one or more of the following formats. Each line in the configuration specifies one group of connections to allow or deny: - -- An IP address allows connections to that IP address, such as `192.168.1.1`. -- A CIDR range allows connections to that CIDR range, such as `192.168.1.0/24`. -- A hostname allows proxying to that host, such as `www.example.com`. -- A domain name preceded by `+*.+` allows proxying to that domain and all of its subdomains, such as `*.example.com`. -- A `!` followed by any of the previous match expressions denies the connection instead. -- If the last line is `*`, then anything that is not explicitly denied is allowed. Otherwise, anything that is not allowed is denied. - -You can also use `*` to allow connections to all remote destinations. - -.Example configuration -[source,text] ----- -!*.example.com -!192.168.1.0/24 -192.168.2.1 -* ----- -endif::http[] - -ifdef::dns[] -When the router is deployed in DNS proxy mode, you specify a list of port and destination mappings. A destination can be either an IP address or a DNS name. - -An egress router pod supports the following formats for specifying port and destination mappings: - -Port and remote address:: - -You can specify a source port and a destination host by using the two field format: `<port> <remote_address>`. - -The host can be an IP address or a DNS name. If a DNS name is provided, DNS resolution occurs at runtime. For a given host, the proxy connects to the destination host on the same port as the specified source port when it connects to the destination host IP address. - -.Port and remote address pair example -[source,text] ----- -80 172.16.12.11 -100 example.com ----- - -Port, remote address, and remote port:: - -You can specify a source port, a destination host, and a destination port by using the three field format: `<port> <remote_address> <remote_port>`. - -The three field format behaves identically to the two field version, with the exception that the destination port can be different from the source port. - -.Port, remote address, and remote port example -[source,text] ----- -8080 192.168.60.252 80 -8443 web.example.com 443 ----- -endif::dns[] - -// unload flavors -ifdef::redirect[] -:!redirect: -endif::[] -ifdef::http[] -:!http: -endif::[] -ifdef::dns[] -:!dns: -endif::[] diff --git a/modules/nw-egress-router-dns-mode.adoc b/modules/nw-egress-router-dns-mode.adoc deleted file mode 100644 index 098ca56f3628..000000000000 --- a/modules/nw-egress-router-dns-mode.adoc +++ /dev/null @@ -1,68 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -:_mod-docs-content-type: PROCEDURE -[id="nw-egress-router-dns-mode_{context}"] -= Deploying an egress router pod in DNS proxy mode - -In _DNS proxy mode_, an egress router pod acts as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* Log in as a user with `cluster-admin` privileges. - -.Procedure - -. Create an egress router pod. - -. Create a service for the egress router pod: - -.. Create a file named `egress-router-service.yaml` that contains the following YAML. 
Set `spec.ports` to the list of ports that you defined previously for the `EGRESS_DNS_PROXY_DESTINATION` environment variable. -+ -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: egress-dns-svc -spec: - ports: - ... - type: ClusterIP - selector: - name: egress-dns-proxy ----- -+ -For example: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: egress-dns-svc -spec: - ports: - - name: con1 - protocol: TCP - port: 80 - targetPort: 80 - - name: con2 - protocol: TCP - port: 100 - targetPort: 100 - type: ClusterIP - selector: - name: egress-dns-proxy ----- - -.. To create the service, enter the following command: -+ -[source,terminal] ----- -$ oc create -f egress-router-service.yaml ----- -+ -Pods can now connect to this service. The connections are proxied to the corresponding ports on the external server, using the reserved egress IP address. diff --git a/modules/nw-egress-router-http-proxy-mode.adoc b/modules/nw-egress-router-http-proxy-mode.adoc deleted file mode 100644 index dda6db2be6da..000000000000 --- a/modules/nw-egress-router-http-proxy-mode.adoc +++ /dev/null @@ -1,62 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -:_mod-docs-content-type: PROCEDURE -[id="nw-egress-router-http-proxy-mode_{context}"] -= Deploying an egress router pod in HTTP proxy mode - -In _HTTP proxy mode_, an egress router pod runs as an HTTP proxy on port `8080`. This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* Log in as a user with `cluster-admin` privileges. - -.Procedure - -. Create an egress router pod. - -. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: egress-1 -spec: - ports: - - name: http-proxy - port: 8080 <1> - type: ClusterIP - selector: - name: egress-1 ----- -<1> Ensure the `http` port is set to `8080`. - -. To configure the client pod (not the egress proxy pod) to use the HTTP proxy, set the `http_proxy` or `https_proxy` variables: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Pod -metadata: - name: app-1 - labels: - name: app-1 -spec: - containers: - env: - - name: http_proxy - value: http://egress-1:8080/ <1> - - name: https_proxy - value: http://egress-1:8080/ - ... ----- -<1> The service created in the previous step. -+ -[NOTE] -==== -Using the `http_proxy` and `https_proxy` environment variables is not necessary for all setups. If the above does not create a working setup, then consult the documentation for the tool or software you are running in the pod. -==== diff --git a/modules/nw-egress-router-pod.adoc b/modules/nw-egress-router-pod.adoc deleted file mode 100644 index 0ba1308d15b6..000000000000 --- a/modules/nw-egress-router-pod.adoc +++ /dev/null @@ -1,231 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. 
- -// Conditional per flavor of Pod -ifeval::["{context}" == "deploying-egress-router-layer3-redirection"] -:redirect: -:router-type: redirect -endif::[] -ifeval::["{context}" == "deploying-egress-router-http-redirection"] -:http: -:router-type: HTTP -endif::[] -ifeval::["{context}" == "deploying-egress-router-dns-redirection"] -:dns: -:router-type: DNS -endif::[] - -:egress-router-image-name: openshift4/ose-egress-router -:egress-router-image-url: registry.redhat.io/{egress-router-image-name} - -ifdef::http[] -:egress-http-proxy-image-name: openshift4/ose-egress-http-proxy -:egress-http-proxy-image-url: registry.redhat.io/{egress-http-proxy-image-name} -endif::[] -ifdef::dns[] -:egress-dns-proxy-image-name: openshift4/ose-egress-dns-proxy -:egress-dns-proxy-image-url: registry.redhat.io/{egress-dns-proxy-image-name} -endif::[] -ifdef::redirect[] -:egress-pod-image-name: openshift4/ose-pod -:egress-pod-image-url: registry.redhat.io/{egress-pod-image-name} -endif::[] - -// All the images are different for OKD -ifdef::openshift-origin[] - -:egress-router-image-name: openshift/origin-egress-router -:egress-router-image-url: {egress-router-image-name} - -ifdef::http[] -:egress-http-proxy-image-name: openshift/origin-egress-http-proxy -:egress-http-proxy-image-url: {egress-http-proxy-image-name} -endif::[] -ifdef::dns[] -:egress-dns-proxy-image-name: openshift/origin-egress-dns-proxy -:egress-dns-proxy-image-url: {egress-dns-proxy-image-name} -endif::[] -ifdef::redirect[] -:egress-pod-image-name: openshift/origin-pod -:egress-pod-image-url: {egress-pod-image-name} -endif::[] - -endif::openshift-origin[] - -[id="nw-egress-router-pod_{context}"] -= Egress router pod specification for {router-type} mode - -Define the configuration for an egress router pod in the `Pod` object. The following YAML describes the fields for the configuration of an egress router pod in {router-type} mode: - -// Because redirect needs privileged access to setup `EGRESS_DESTINATION` -// and the other modes do not, this ends up needing its own almost -// identical Pod. It's not possible to use conditionals for an unequal -// number of callouts. - -ifdef::redirect[] -[source,yaml,subs="attributes+"] ----- -apiVersion: v1 -kind: Pod -metadata: - name: egress-1 - labels: - name: egress-1 - annotations: - pod.network.openshift.io/assign-macvlan: "true" <1> -spec: - initContainers: - - name: egress-router - image: {egress-router-image-url} - securityContext: - privileged: true - env: - - name: EGRESS_SOURCE <2> - value: - - name: EGRESS_GATEWAY <3> - value: - - name: EGRESS_DESTINATION <4> - value: - - name: EGRESS_ROUTER_MODE - value: init - containers: - - name: egress-router-wait - image: {egress-pod-image-url} ----- -<1> The annotation tells {product-title} to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the `"true"` value. To have {product-title} create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, `eth1`. -<2> IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the `/24` suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the `EGRESS_GATEWAY` variable and no other hosts on the subnet. 
-<3> Same value as the default gateway used by the node. -<4> External server to direct traffic to. Using this example, connections to the pod are redirected to `203.0.113.25`, with a source IP address of `192.168.12.99`. - -.Example egress router pod specification -[source,yaml,subs="attributes+"] ----- -apiVersion: v1 -kind: Pod -metadata: - name: egress-multi - labels: - name: egress-multi - annotations: - pod.network.openshift.io/assign-macvlan: "true" -spec: - initContainers: - - name: egress-router - image: {egress-router-image-url} - securityContext: - privileged: true - env: - - name: EGRESS_SOURCE - value: 192.168.12.99/24 - - name: EGRESS_GATEWAY - value: 192.168.12.1 - - name: EGRESS_DESTINATION - value: | - 80 tcp 203.0.113.25 - 8080 tcp 203.0.113.26 80 - 8443 tcp 203.0.113.26 443 - 203.0.113.27 - - name: EGRESS_ROUTER_MODE - value: init - containers: - - name: egress-router-wait - image: {egress-pod-image-url} ----- -endif::redirect[] - -// Many conditionals because DNS offers one additional env variable. - -ifdef::dns,http[] -[source,yaml,subs="attributes+"] ----- -apiVersion: v1 -kind: Pod -metadata: - name: egress-1 - labels: - name: egress-1 - annotations: - pod.network.openshift.io/assign-macvlan: "true" <1> -spec: - initContainers: - - name: egress-router - image: {egress-router-image-url} - securityContext: - privileged: true - env: - - name: EGRESS_SOURCE <2> - value: - - name: EGRESS_GATEWAY <3> - value: - - name: EGRESS_ROUTER_MODE -ifdef::dns[] - value: dns-proxy -endif::dns[] -ifdef::http[] - value: http-proxy -endif::http[] - containers: - - name: egress-router-pod -ifdef::dns[] - image: {egress-dns-proxy-image-url} - securityContext: - privileged: true -endif::dns[] -ifdef::http[] - image: {egress-http-proxy-image-url} -endif::http[] - env: -ifdef::http[] - - name: EGRESS_HTTP_PROXY_DESTINATION <4> - value: |- - ... -endif::http[] -ifdef::dns[] - - name: EGRESS_DNS_PROXY_DESTINATION <4> - value: |- - ... - - name: EGRESS_DNS_PROXY_DEBUG <5> - value: "1" -endif::dns[] - ... ----- -<1> The annotation tells {product-title} to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the `"true"` value. To have {product-title} create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, `eth1`. -<2> IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the `/24` suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the `EGRESS_GATEWAY` variable and no other hosts on the subnet. -<3> Same value as the default gateway used by the node. -ifdef::http[] -<4> A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container. -endif::http[] -ifdef::dns[] -<4> Specify a list of one or more proxy destinations. -<5> Optional: Specify to output the DNS proxy log output to `stdout`. 
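The following fragment is a minimal illustration only of what a filled-in `EGRESS_DNS_PROXY_DESTINATION` value might look like; the destination IP address and DNS name are placeholder values that follow the port and destination mapping format described in "Egress destination configuration format", not values from your environment.

.Example destination value for DNS proxy mode
[source,yaml]
----
env:
- name: EGRESS_DNS_PROXY_DESTINATION
  value: |-
    80 172.16.12.11
    100 example.com
----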
-endif::dns[] -endif::[] - -// unload flavors -ifdef::redirect[] -:!redirect: -endif::[] -ifdef::http[] -:!http: -endif::[] -ifdef::dns[] -:!dns: -endif::[] -ifdef::router-type[] -:!router-type: -endif::[] - -// unload images -ifdef::egress-router-image-name[] -:!egress-router-image-name: -endif::[] -ifdef::egress-router-image-url[] -:!egress-router-image-url: -endif::[] -ifdef::egress-pod-image-name[] -:!egress-pod-image-name: -endif::[] -ifdef::egress-pod-image-url[] -:!egress-pod-image-url: -endif::[] diff --git a/modules/nw-egress-router-redirect-mode.adoc b/modules/nw-egress-router-redirect-mode.adoc deleted file mode 100644 index 7ee506882f99..000000000000 --- a/modules/nw-egress-router-redirect-mode.adoc +++ /dev/null @@ -1,46 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -:_mod-docs-content-type: PROCEDURE -[id="nw-egress-router-redirect-mode_{context}"] -= Deploying an egress router pod in redirect mode - -In _redirect mode_, an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the `curl` command. For example: - -[source,terminal] ----- -$ curl ----- - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* Log in as a user with `cluster-admin` privileges. - -.Procedure - -. Create an egress router pod. - -. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: egress-1 -spec: - ports: - - name: http - port: 80 - - name: https - port: 443 - type: ClusterIP - selector: - name: egress-1 ----- -+ -Your pods can now connect to this service. Their connections are redirected to -the corresponding ports on the external server, using the reserved egress IP -address. diff --git a/modules/nw-ingress-integrating-route-secret-certificate.adoc b/modules/nw-ingress-integrating-route-secret-certificate.adoc deleted file mode 100644 index 4ac39bf62fa0..000000000000 --- a/modules/nw-ingress-integrating-route-secret-certificate.adoc +++ /dev/null @@ -1,6 +0,0 @@ -// -// * ingress/routes.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-ingress-integrating-route-secret-certificate_{context}"] -= Securing route with external certificates in TLS secrets diff --git a/modules/nw-load-balancing-about.adoc b/modules/nw-load-balancing-about.adoc deleted file mode 100644 index fc4e001185ea..000000000000 --- a/modules/nw-load-balancing-about.adoc +++ /dev/null @@ -1,23 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-load-balancing-about_{context}"] -= Supported load balancers - -Load balancing distributes incoming network traffic across multiple servers to maintain the health and efficiency of your clusters by ensuring that no single server bears too much load. Load balancers are devices that perform load balancing. 
They act as intermediaries between clients and servers to manage and direct traffic based on predefined rules. - -{product-title} supports the following types of load balancers: - -* Classic Load Balancer (CLB) -* Elastic Load Balancing (ELB) -* Network Load Balancer (NLB) -* Application Load Balancer (ALB) - -ELB is the default load-balancer type for AWS routers. CLB is the default for self-managed environments. NLB is the default for Red Hat OpenShift Service on AWS (ROSA). - -[IMPORTANT] -==== -Use ALB in front of an application but not in front of a router. Using an ALB requires the AWS Load Balancer Operator add-on. This operator is not supported for all {aws-first} regions or for all {product-title} profiles. -==== \ No newline at end of file diff --git a/modules/nw-load-balancing-configure-define-type.adoc b/modules/nw-load-balancing-configure-define-type.adoc deleted file mode 100644 index 178e733d54ec..000000000000 --- a/modules/nw-load-balancing-configure-define-type.adoc +++ /dev/null @@ -1,24 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-load-balancing-configure-define-type_{context}"] -= Define the default load balancer type - -When installing the cluster, you can specify the type of load balancer that you want to use. The type of load balancer you choose at cluster installation gets applied to the entire cluster. - -This example shows how to define the default load-balancer type for a cluster deployed on {aws-short}.You can apply the procedure on other supported platforms. - -[source,yaml] ----- -apiVersion: v1 -kind: Network -metadata: - name: cluster -platform: - aws: <1> - lbType: classic <2> ----- -<1> The `platform` key represents the platform on which you have deployed your cluster. This example uses `aws`. -<2> The `lbType` key represents the load balancer type. This example uses the Classic Load Balancer, `classic`. \ No newline at end of file diff --git a/modules/nw-load-balancing-configure-specify-behavior.adoc b/modules/nw-load-balancing-configure-specify-behavior.adoc deleted file mode 100644 index 0420ac3ed7d4..000000000000 --- a/modules/nw-load-balancing-configure-specify-behavior.adoc +++ /dev/null @@ -1,35 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-load-balancing-configure-specify-behavior_{context}"] -= Specify load balancer behavior for an Ingress Controller - -After you install a cluster, you can configure your Ingress Controller to specify how services are exposed to external networks, so that you can better control the settings and behavior of a load balancer. - -[NOTE] -==== -Changing the load balancer settings on an Ingress Controller might override the load balancer settings you specified at installation. -==== - -[source,yaml] ----- -apiVersion: v1 -kind: Network -metadata: - name: cluster -endpointPublishingStrategy: - loadBalancer: <1> - dnsManagementPolicy: Managed - providerParameters: - aws: - classicLoadBalancer: <2> - connectionIdleTimeout: 0s - type: Classic - type: AWS - scope: External - type: LoadBalancerService ----- -<1> The `loadBalancer' field specifies the load balancer configuration settings. -<2> The `classicLoadBalancer` field sets the load balancer to `classic` and includes settings specific to the CLB on {aws-short}. 
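For reference, the following command is a minimal sketch of one way to change a single Classic Load Balancer setting after installation. It assumes that the default Ingress Controller in the `openshift-ingress-operator` namespace already uses the `LoadBalancerService` endpoint publishing strategy on {aws-short}; the timeout value is illustrative only:

[source,terminal]
----
$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
  -p '{"spec":{"endpointPublishingStrategy":{"loadBalancer":{"providerParameters":{"aws":{"classicLoadBalancer":{"connectionIdleTimeout":"5m"}}}}}}}'
----

You can then inspect the resulting configuration with `oc -n openshift-ingress-operator get ingresscontroller/default -o yaml`.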
\ No newline at end of file diff --git a/modules/nw-load-balancing-configure.adoc b/modules/nw-load-balancing-configure.adoc deleted file mode 100644 index b4d50bdfafc4..000000000000 --- a/modules/nw-load-balancing-configure.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-load-balancing-configure_{context}"] -= Configuring Load balancers - -You can define your default load-balancer type during cluster installation. After installation, you can configure your ingress controller to behave in a specific way that is not covered by the global platform configuration that you defined at cluster installation. \ No newline at end of file diff --git a/modules/nw-multitenant-global.adoc b/modules/nw-multitenant-global.adoc deleted file mode 100644 index 5a1f86f7fd2c..000000000000 --- a/modules/nw-multitenant-global.adoc +++ /dev/null @@ -1,26 +0,0 @@ -// Module included in the following assemblies: -// * networking/multitenant-isolation.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-multitenant-global_{context}"] -= Disabling network isolation for a project - -You can disable network isolation for a project. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* You must log in to the cluster with a user that has the `cluster-admin` role. - -.Procedure - -* Run the following command for the project: -+ -[source,terminal] ----- -$ oc adm pod-network make-projects-global ----- -+ -Alternatively, instead of specifying specific project names, you can use the -`--selector=` option to specify projects based upon an -associated label. diff --git a/modules/nw-multitenant-isolation.adoc b/modules/nw-multitenant-isolation.adoc deleted file mode 100644 index c7bb9217f718..000000000000 --- a/modules/nw-multitenant-isolation.adoc +++ /dev/null @@ -1,27 +0,0 @@ -// Module included in the following assemblies: -// * networking/multitenant-isolation.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-multitenant-isolation_{context}"] -= Isolating a project - -You can isolate a project so that pods and services in other projects cannot -access its pods and services. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* You must log in to the cluster with a user that has the `cluster-admin` role. - -.Procedure - -* To isolate the projects in the cluster, run the following command: -+ -[source,terminal] ----- -$ oc adm pod-network isolate-projects ----- -+ -Alternatively, instead of specifying specific project names, you can use the -`--selector=` option to specify projects based upon an -associated label. diff --git a/modules/nw-multitenant-joining.adoc b/modules/nw-multitenant-joining.adoc deleted file mode 100644 index 47a883dbb50b..000000000000 --- a/modules/nw-multitenant-joining.adoc +++ /dev/null @@ -1,37 +0,0 @@ -// Module included in the following assemblies: -// * networking/multitenant-isolation.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-multitenant-joining_{context}"] -= Joining projects - -You can join two or more projects to allow network traffic between pods and -services in different projects. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* You must log in to the cluster with a user that has the `cluster-admin` role. - -.Procedure - -. 
Use the following command to join projects to an existing project network: -+ -[source,terminal] ----- -$ oc adm pod-network join-projects --to= ----- -+ -Alternatively, instead of specifying specific project names, you can use the -`--selector=` option to specify projects based upon an -associated label. - -. Optional: Run the following command to view the pod networks that you have -joined together: -+ -[source,terminal] ----- -$ oc get netnamespaces ----- -+ -Projects in the same pod-network have the same network ID in the *NETID* column. diff --git a/modules/nw-ne-changes-externalip-ovn.adoc b/modules/nw-ne-changes-externalip-ovn.adoc deleted file mode 100644 index 715781025411..000000000000 --- a/modules/nw-ne-changes-externalip-ovn.adoc +++ /dev/null @@ -1,20 +0,0 @@ -// Module included in the following assemblies: -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: REFERENCE -[id="nw-ne-changes-externalip-ovn_{context}"] -= Understanding changes in external IP behavior with OVN-Kubernetes - -When migrating from OpenShift SDN to OVN-Kubernetes (OVN-K), services that use external IPs might become inaccessible across namespaces due to `NetworkPolicy` enforcement. - -In OpenShift SDN, external IPs were accessible across namespaces by default. However, in OVN-K, network policies strictly enforce multitenant isolation, preventing access to services exposed via external IPs from other namespaces. - -To ensure accessibility, consider the following alternatives: - -* Use an ingress or route: Instead of exposing services by using external IPs, configure an ingress or route to allow external access while maintaining security controls. - -* Adjust `NetworkPolicies`: Modify `NetworkPolicy` rules to explicitly allow access from required namespaces and ensure that traffic is allowed to the designated service ports. Without allowing traffic to the required ports, access might still be blocked, even if the namespace is explicitly allowed. - -* Use a `LoadBalancer` service: If applicable, deploy a `LoadBalancer` service instead of relying on external IPs. - -For more information on configuring NetworkPolicies, see "Configuring NetworkPolicies". diff --git a/modules/nw-ne-comparing-ingress-route.adoc b/modules/nw-ne-comparing-ingress-route.adoc deleted file mode 100644 index cc4def642704..000000000000 --- a/modules/nw-ne-comparing-ingress-route.adoc +++ /dev/null @@ -1,11 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -[id="nw-ne-comparing-ingress-route_{context}"] -= Comparing routes and Ingress -The Kubernetes Ingress resource in {product-title} implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. The most common way to manage Ingress traffic is with the Ingress Controller. You can scale and replicate this pod like any other regular pod. This router service is based on link:http://www.haproxy.org/[HAProxy], which is an open source load balancer solution. - -The {product-title} route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. - -Ingress traffic accesses services in the cluster through a route. Routes and Ingress are the main resources for handling Ingress traffic. 
Ingress provides features similar to a route, such as accepting external requests and delegating them based on the route. However, with Ingress you can only allow certain types of connections: HTTP/2, HTTPS and server name identification (SNI), and TLS with certificate. In {product-title}, routes are generated to meet the conditions specified by the Ingress resource. diff --git a/modules/nw-ne-openshift-dns.adoc b/modules/nw-ne-openshift-dns.adoc deleted file mode 100644 index 3bf03d6c308f..000000000000 --- a/modules/nw-ne-openshift-dns.adoc +++ /dev/null @@ -1,19 +0,0 @@ -// Module included in the following assemblies: -// * understanding-networking.adoc - - -[id="nw-ne-openshift-dns_{context}"] -= {product-title} DNS - -If you are running multiple services, such as front-end and back-end services for -use with multiple pods, environment variables are created for user names, -service IPs, and more so the front-end pods can communicate with the back-end -services. If the service is deleted and recreated, a new IP address can be -assigned to the service, and requires the front-end pods to be recreated to pick -up the updated values for the service IP environment variable. Additionally, the -back-end service must be created before any of the front-end pods to ensure that -the service IP is generated properly, and that it can be provided to the -front-end pods as an environment variable. - -For this reason, {product-title} has a built-in DNS so that the services can be -reached by the service DNS as well as the service IP/port. diff --git a/modules/nw-networking-glossary-terms.adoc b/modules/nw-networking-glossary-terms.adoc deleted file mode 100644 index f1b12665d6b7..000000000000 --- a/modules/nw-networking-glossary-terms.adoc +++ /dev/null @@ -1,118 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: REFERENCE -[id="nw-networking-glossary-terms_{context}"] -= Glossary of common terms for {product-title} networking - -This glossary defines common terms that are used in the networking content. - -authentication:: -To control access to an {product-title} cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an {product-title} cluster, you must authenticate to the {product-title} API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the {product-title} API. - -AWS Load Balancer Operator:: -The AWS Load Balancer (ALB) Operator deploys and manages an instance of the `aws-load-balancer-controller`. - -Cluster Network Operator:: -The Cluster Network Operator (CNO) deploys and manages the cluster network components in an {product-title} cluster. This includes deployment of the Container Network Interface (CNI) network plugin selected for the cluster during installation. - -config map:: -A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type `ConfigMap`. Applications running in a pod can use this data. - -custom resource (CR):: -A CR is extension of the Kubernetes API. You can create custom resources. - -DNS:: -Cluster DNS is a DNS server which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. - -DNS Operator:: -The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. 
This enables DNS-based Kubernetes Service discovery in {product-title}. - -deployment:: -A Kubernetes resource object that maintains the life cycle of an application. - -domain:: -Domain is a DNS name serviced by the Ingress Controller. - -egress:: -The process of data sharing externally through a network’s outbound traffic from a pod. - -External DNS Operator:: -The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to {product-title}. - -HTTP-based route:: -An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. - -Ingress:: -The Kubernetes Ingress resource in {product-title} implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. - -Ingress Controller:: -The Ingress Operator manages Ingress Controllers. Using an Ingress Controller is the most common way to allow external access to an {product-title} cluster. - -installer-provisioned infrastructure:: -The installation program deploys and configures the infrastructure that the cluster runs on. - -kubelet:: -A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. - -Kubernetes NMState Operator:: -The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the {product-title} cluster’s nodes with NMState. - -kube-proxy:: -Kube-proxy is a proxy service which runs on each node and helps in making services available to the external host. It helps in forwarding the request to correct containers and is capable of performing primitive load balancing. - -load balancers:: -{product-title} uses load balancers for communicating from outside the cluster with services running in the cluster. - -MetalLB Operator:: -As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type `LoadBalancer` is added to the cluster, MetalLB can add an external IP address for the service. - -multicast:: -With IP multicast, data is broadcast to many IP addresses simultaneously. - -namespaces:: -A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. - -networking:: -Network information of a {product-title} cluster. - -node:: -A worker machine in the {product-title} cluster. A node is either a virtual machine (VM) or a physical machine. - -{product-title} Ingress Operator:: -The Ingress Operator implements the `IngressController` API and is the component responsible for enabling external access to {product-title} services. - -pod:: -One or more containers with shared resources, such as volume and IP addresses, running in your {product-title} cluster. -A pod is the smallest compute unit defined, deployed, and managed. - -PTP Operator:: -The PTP Operator creates and manages the `linuxptp` services. - -route:: -The {product-title} route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. - -scaling:: -Increasing or decreasing the resource capacity. - -service:: -Exposes a running application on a set of pods. 
- -Single Root I/O Virtualization (SR-IOV) Network Operator:: -The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. - -software-defined networking (SDN):: -A software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the {product-title} cluster. - -Stream Control Transmission Protocol (SCTP):: -SCTP is a reliable message based protocol that runs on top of an IP network. - -taint:: -Taints and tolerations ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node. - -toleration:: -You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints. - -web console:: -A user interface (UI) to manage {product-title}. diff --git a/modules/nw-networkpolicy-optimize.adoc b/modules/nw-networkpolicy-optimize.adoc deleted file mode 100644 index c54f742829f0..000000000000 --- a/modules/nw-networkpolicy-optimize.adoc +++ /dev/null @@ -1,22 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/network_security/network_policy/about-network-policy.adoc - -[id="nw-networkpolicy-optimize-sdn_{context}"] -= Optimizations for network policy with OpenShift SDN - -Use a network policy to isolate pods that are differentiated from one another by labels within a namespace. - -It is inefficient to apply `NetworkPolicy` objects to large numbers of individual pods in a single namespace. Pod labels do not exist at the IP address level, so a network policy generates a separate Open vSwitch (OVS) flow rule for every possible link between every pod selected with a `podSelector`. - -For example, if the spec `podSelector` and the ingress `podSelector` within a `NetworkPolicy` object each match 200 pods, then 40,000 (200*200) OVS flow rules are generated. This might slow down a node. - -When designing your network policy, refer to the following guidelines: - -* Reduce the number of OVS flow rules by using namespaces to contain groups of pods that need to be isolated. -+ -`NetworkPolicy` objects that select a whole namespace, by using the `namespaceSelector` or an empty `podSelector`, generate only a single OVS flow rule that matches the VXLAN virtual network ID (VNID) of the namespace. - -* Keep the pods that do not need to be isolated in their original namespace, and move the pods that require isolation into one or more different namespaces. - -* Create additional targeted cross-namespace network policies to allow the specific traffic that you do want to allow from the isolated pods. diff --git a/modules/nw-secondary-ext-gw-status.adoc b/modules/nw-secondary-ext-gw-status.adoc deleted file mode 100644 index 24c792804080..000000000000 --- a/modules/nw-secondary-ext-gw-status.adoc +++ /dev/null @@ -1,45 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/ovn_kubernetes_network_provider/configuring-secondary-external-gateway.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-secondary-ext-gw-status_{context}"] -= View the status of an external gateway - -You can view the status of an external gateway that is configured for your cluster. 
The `status` field for the `AdminPolicyBasedExternalRoute` custom resource reports recent status messages whenever you update the resource, subject to a few limitations: - -- Impacted namespaces are not reported in status messages. -- Pods selected as part of a dynamic next hop configuration do not trigger status updates as a result of pod lifecycle events, such as pod termination. - -.Prerequisites - -* You installed the OpenShift CLI (`oc`). -* You are logged in to the cluster as a user with `cluster-admin` privileges. - -.Procedure - -* To access the status logs for a secondary external gateway, enter the following command: -+ -[source,terminal] ----- -$ oc get adminpolicybasedexternalroutes <name> -o yaml ----- -+ --- -where: - -`<name>`:: Specifies the name of an `AdminPolicyBasedExternalRoute` object. --- -+ -.Example output -[source,text] ----- -... -Status: - Last Transition Time: 2023-04-24T14:49:45Z - Messages: - Configured external gateway IPs: 172.18.0.8,172.18.0.9 - Configured external gateway IPs: 172.18.0.8 - Status: Success -Events: ----- diff --git a/modules/nw-sriov-about-all-multi-cast_mode.adoc b/modules/nw-sriov-about-all-multi-cast_mode.adoc deleted file mode 100644 index 39598acaffbc..000000000000 --- a/modules/nw-sriov-about-all-multi-cast_mode.adoc +++ /dev/null @@ -1,20 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/hardware_networks/configuring-interface-sysctl-sriov-device.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-about-all-one-sysctl-flag_{context}"] -= Setting one sysctl flag - -You can set interface-level network `sysctl` settings for a pod connected to an SR-IOV network device. - -In this example, `net.ipv4.conf.IFNAME.accept_redirects` is set to `1` on the created virtual interfaces. - -The `sysctl-tuning-test` namespace is used in this example. - -* Use the following command to create the `sysctl-tuning-test` namespace: -+ -[source,terminal] ----- -$ oc create namespace sysctl-tuning-test ----- - diff --git a/modules/nw-udn-examples.adoc b/modules/nw-udn-examples.adoc deleted file mode 100644 index 5e9409ed69de..000000000000 --- a/modules/nw-udn-examples.adoc +++ /dev/null @@ -1,67 +0,0 @@ -//module included in the following assembly: -// -// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc - -:_mod-docs-content-type: REFERENCE -[id="nw-udn-examples_{context}"] -= Configuration details and examples of UserDefinedNetworks - -The following sections include configuration details and examples for creating user-defined networks (UDN) by using the custom resource definition. - -[id=configuration-details-layer-two_{context}] -== Configuration details for Layer2 topology -The following rules apply when creating a UDN with a `Layer2` topology: - -* The `subnets` field is optional. -* The `subnets` field is of type `string` and accepts standard CIDR formats for both IPv4 and IPv6. -* The `subnets` field accepts one or two items. For two items, they must be of a different family. For example, `subnets` values of `10.100.0.0/16` and `2001:db8::/64`. -* `Layer2` subnets can be omitted. If omitted, users must configure IP addresses for the pods. As a consequence, port security only prevents MAC spoofing. -* The `Layer2` `subnets` field is mandatory when `ipamLifecycle` is specified. 
- -.Example of UDN over `Layer2` topology -[%collapsible] -==== -[source,terminal] ----- -apiVersion: k8s.ovn.org/v1 -kind: UserDefinedNetwork -metadata: - name: udn-network-primary - namespace: -spec: - topology: Layer2 - layer2: - role: Primary - subnets: ["10.150.0.0/16"] ----- -==== - -[id=configuration-details-layer-three_{context}] -== Configuration details for Layer3 topology -The following rules apply when creating a UDN with a `Layer3` topology: - -* The `subnets` field is mandatory. -* The type for `subnets` field is `cidr` and `hostsubnet`: -+ -** `cidr` is the cluster subnet and accepts a string value. -** `hostSubnet` specifies the nodes subnet prefix that the cluster subnet is split to. - -.Example of UDN over `Layer3` topology -[%collapsible] -==== -[source,terminal] ----- -apiVersion: k8s.ovn.org/v1 -kind: UserDefinedNetwork -metadata: - name: udn-network-primary - namespace: -spec: - topology: Layer3 - layer3: - role: Primary - subnets: - - cidr: 10.150.0.0/16 - hostsubnet: 24 ----- -==== \ No newline at end of file diff --git a/modules/nw-understanding-networking-choosing-service-types.adoc b/modules/nw-understanding-networking-choosing-service-types.adoc deleted file mode 100644 index cc26f952937d..000000000000 --- a/modules/nw-understanding-networking-choosing-service-types.adoc +++ /dev/null @@ -1,34 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-choosing-service-types_{context}"] -= Choosing between service types and API resources - -Service types and API resources offer different benefits for exposing applications and securing network connections. By leveraging the appropriate service type or API resource, you can effectively manage how your applications are exposed and ensure secure, reliable access for both internal and external clients. - -{product-title} supports the following service types and API resources: - -* Service Types - -** `ClusterIP` is intended for internal-only exposure. It is easy to set up and provides a stable internal IP address for accessing services within the cluster. `ClusterIP` is suitable for communication between services within the cluster. - -** `NodePort` allows external access by exposing the service on each node's IP at a static port. It is straightforward to set up and useful for development and testing. `NodePort` is good for simple external access without the need for a load balancer from the cloud provider. - -** `LoadBalancer` automatically provisions an external load balancer to distribute traffic across multiple nodes. -It is ideal for production environments where reliable, high-availability access is needed. - -** `ExternalName` maps a service to an external DNS name to allow services outside the cluster to be accessed using the service's DNS name. It is good for integrating external services or legacy systems with the cluster. - -** Headless service is a DNS name that returns the list of pod IPs without providing a stable `ClusterIP`. This is ideal for stateful applications or scenarios where direct access to individual pod IPs is needed. - -* API Resources - -** `Ingress` provides control over routing HTTP and HTTPS traffic, including support for load balancing, SSL/TLS termination, and name-based virtual hosting. It is more flexible than services alone and supports multiple domains and paths. `Ingress` is ideal when complex routing is required. 
- -** `Route` is similar to `Ingress` but provides additional features, including TLS re-encryption and passthrough. It simplifies the process of exposing services externally. `Route` is best for when you need advanced features, such as integrated certificate management. - -If you need a simple way to expose a service to external traffic, `Route` or `Ingress` might be the best choice. These resources can be managed by a namespace admin or developer. The easiest approach is to create a route, check its external DNS name, and configure your DNS to have a CNAME that points to the external DNS name. - -For HTTP/HTTPS/TLS, `Route` or `Ingress` should suffice. Anything else is more complex and requires a cluster admin to ensure ports are accessible or MetalLB is configured. `LoadBalancer` services are also an option in cloud environments or appropriately configured bare-metal environments. \ No newline at end of file diff --git a/modules/nw-understanding-networking-common-practices.adoc b/modules/nw-understanding-networking-common-practices.adoc deleted file mode 100644 index de0ce2ed8a6c..000000000000 --- a/modules/nw-understanding-networking-common-practices.adoc +++ /dev/null @@ -1,13 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-common-practices_{context}"] -= Common practices for networking services - -In {product-title}, services create a single IP address for clients to use, even if multiple pods are providing that service. This abstraction enables seamless scaling, fault tolerance, and rolling upgrades without affecting clients. - -Network security policies manage traffic within the cluster. Network controls empower namespace administrators to define ingress and egress rules for their pods. By using network administration policies, cluster administrators can establish namespace policies, override namespace policies, or set default policies when none are defined. - -Egress firewall configurations control outbound traffic from pods. These configuration settings ensure that only authorized communication occurs. The ingress node firewall protects nodes by controlling incoming traffic. Additionally, the Universal Data Network manages data traffic across the cluster. \ No newline at end of file diff --git a/modules/nw-understanding-networking-concepts-components.adoc b/modules/nw-understanding-networking-concepts-components.adoc deleted file mode 100644 index e23c509e363a..000000000000 --- a/modules/nw-understanding-networking-concepts-components.adoc +++ /dev/null @@ -1,27 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-concepts-components_{context}"] -= Networking concepts and components - -Networking in {product-title} uses several key components and concepts. - -* Pods and services are the smallest deployable units in Kubernetes, and services provide stable IP addresses and DNS names for sets of pods. Each pod in a cluster is assigned a unique IP address. Pods use IP addresses to communicate directly with other pods, regardless of which node they are on. The pod IP addresses will change when pods are destroyed and created. Services are also assigned unique IP addresses. A service is associated with the pods that can provide the service. 
When accessed, the service IP address provides a stable way to access pods by sending traffic to one of the pods that backs the service. - -* Route and Ingress APIs define rules that route HTTP, HTTPS, and TLS traffic to services within the cluster. {product-title} provides both Route and Ingress APIs as part of the default installation, but you can add third-party Ingress Controllers to the cluster. - -* The Container Network Interface (CNI) plugin manages the pod network to enable pod-to-pod communication. - -* The Cluster Network Operator (CNO) CNO manages the networking plugin components of a cluster. Using the CNO, you can set the network configuration, such as the pod network CIDR and service network CIDR. - -* DNS operators manage DNS services within the cluster to ensure that services are reachable by their DNS names. - -* Network controls define how pods are allowed to communicate with each other and with other network endpoints. These policies help secure the cluster by controlling traffic flow and enforcing rules for pod communication. - -* Load balancing distributes network traffic across multiple servers to ensure reliability and performance. - -* Service discovery is a mechanism for services to find and communicate with each other within the cluster. - -* The Ingress Operator uses {product-title} Route to manage the router and enable external access to cluster services. \ No newline at end of file diff --git a/modules/nw-understanding-networking-controls.adoc b/modules/nw-understanding-networking-controls.adoc deleted file mode 100644 index 547a7211bb8b..000000000000 --- a/modules/nw-understanding-networking-controls.adoc +++ /dev/null @@ -1,15 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-controls_{context}"] -= Network controls - -Network controls define rules for how pods are allowed to communicate with each other and with other network endpoints. Network controls are implemented at the network level to ensure that only allowed traffic can flow between pods. This helps secure the cluster by restricting traffic flow and preventing unauthorized access. - -* Admin network policies (ANP): ANPs are cluster-scoped custom resource definitions (CRDs). As a cluster administrator, you can use an ANP to define network policies at a cluster level. You cannot override these policies by using regular network policy objects. These policies enforce strict network security rules across the entire cluster. ANPs can specify ingress and egress rules to allow administrators to control the traffic that enters and leaves the cluster. - -* Egress firewall: The egress firewall restricts egress traffic leaving the cluster. With this firewall, administrators can limit the external hosts that pods can access from within the cluster. You can configure egress firewall policies to allow or deny traffic to specific IP ranges, DNS names, or external services. This helps prevent unauthorized access to external resources and ensures that only allowed traffic can leave the cluster. - -* Ingress node firewall: The ingress node firewall controls ingress traffic to the nodes in a cluster. With this firewall, administrators define rules that restrict which external hosts can initiate connections to the nodes. This helps protect the nodes from unauthorized access and ensures that only trusted traffic can reach the cluster. 
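As a concrete illustration of the egress firewall described above, the following is a minimal sketch of an `EgressFirewall` object for the OVN-Kubernetes network plugin. The namespace, DNS name, and CIDR are placeholder values:

[source,yaml]
----
apiVersion: k8s.ovn.org/v1
kind: EgressFirewall
metadata:
  name: default        # the EgressFirewall object in a namespace must be named "default"
  namespace: project1  # placeholder namespace
spec:
  egress:
  - type: Allow        # allow pods in the namespace to reach this external host
    to:
      dnsName: db.example.com
  - type: Deny         # deny all other traffic that leaves the cluster
    to:
      cidrSelector: 0.0.0.0/0
----

Rules are evaluated in order, so the final `Deny` rule acts as a catch-all after the more specific `Allow` rule.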
\ No newline at end of file diff --git a/modules/nw-understanding-networking-dns-example.adoc b/modules/nw-understanding-networking-dns-example.adoc deleted file mode 100644 index b2a2c78452f3..000000000000 --- a/modules/nw-understanding-networking-dns-example.adoc +++ /dev/null @@ -1,106 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-understanding-networking-dns-example_{context}"] -= Example: DNS use case - -For this example, a front-end application is running in one set of pods and a back-end service is running in another set of pods. The front-end application needs to communicate with the back-end service. You create a service for the back-end pods that gives it a stable IP address and DNS name. The front-end pods use this DNS name to access the back-end service regardless of changes to individual pod IP addresses. - -By creating a service for the back-end pods, you provide a stable IP and DNS name, `backend-service.default.svc.cluster.local`, that the front-end pods can use to communicate with the back-end service. This setup would ensure that even if individual pod IP addresses change, the communication remains consistent and reliable. - -The following steps demonstrate an example of how to configure front-end pods to communicate with a back-end service using DNS. - -. Create the back-end service. - -.. Deploy the back-end pods. -+ -[source, yaml] ----- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: backend-deployment - labels: - app: backend -spec: - replicas: 3 - selector: - matchLabels: - app: backend - template: - metadata: - labels: - app: backend - spec: - containers: - - name: backend-container - image: your-backend-image - ports: - - containerPort: 8080 ----- - -.. Define a service to expose the back-end pods. -+ -[source, yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: backend-service -spec: - selector: - app: backend - ports: - - protocol: TCP - port: 80 - targetPort: 8080 ----- - -. Create the front-end pods. - -.. Define the front-end pods. -+ -[source, yaml] ----- -apiVersion: apps/v1 -kind: Deployment -metadata: - name: frontend-deployment - labels: - app: frontend - spec: - replicas: 3 - selector: - matchLabels: - app: frontend - template: - metadata: - labels: - app: frontend - spec: - containers: - - name: frontend-container - image: your-frontend-image - ports: - - containerPort: 80 ----- - -.. Apply the pod definition to your cluster. -+ -[source,terminal] ----- -$ oc apply -f frontend-deployment.yaml ----- - -. Configure the front-end to communicate with the back-end. -+ -In your front-end application code, use the DNS name of the back-end service to send requests. 
For example, if your front-end application needs to fetch data from the back-end pod, your application might include the following code: -+ -[source, JavaScript] ----- -fetch('http://backend-service.default.svc.cluster.local/api/data') - .then(response => response.json()) - .then(data => console.log(data)); ----- \ No newline at end of file diff --git a/modules/nw-understanding-networking-dns-terms.adoc b/modules/nw-understanding-networking-dns-terms.adoc deleted file mode 100644 index 4611f512ed15..000000000000 --- a/modules/nw-understanding-networking-dns-terms.adoc +++ /dev/null @@ -1,19 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-dns-terms_{context}"] -= Key DNS terms - -* CoreDNS: CoreDNS is the DNS server and provides name resolution for services and pods. - -* DNS names: Services are assigned DNS names based on their namespace and name. For example, a service named `my-service` in the `default` namespace would have the DNS name `my-service.default.svc.cluster.local`. - -* Domain names: Domain names are the human-friendly names used to access websites and services, such as `example.com`. - -* IP addresses: IP addresses are numerical labels assigned to each device connected to a computer network that uses IP for communication. An example of an IPv4 address is `192.0.2.1`. An example of an IPv6 address is `2001:0db8:85a3:0000:0000:8a2e:0370:7334`. - -* DNS servers: DNS servers are specialized servers that store DNS records. These records map domain names to IP addresses. When you type a domain name into your browser, your computer contacts a DNS server to find the corresponding IP address. - -* Resolution process: A DNS query is sent to a DNS resolver. The DNS resolver then contacts a series of DNS servers to find the IP address associated with the domain name. The resolver will try using the name with a series of domains, such as `.svc.cluster.local`, `svc.cluster.local`, and `cluster.local`. This process stops at the first match. The IP address is returned to your browser and then connects to the web server using the IP address. diff --git a/modules/nw-understanding-networking-dns.adoc b/modules/nw-understanding-networking-dns.adoc deleted file mode 100644 index aa062fa907d2..000000000000 --- a/modules/nw-understanding-networking-dns.adoc +++ /dev/null @@ -1,11 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-dns_{context}"] -= The Domain Name System (DNS) - -The Domain Name System (DNS) is a hierarchical and decentralized naming system used to translate human-friendly domain names, such as www.example.com, into IP addresses that identify computers on a network. DNS plays a crucial role in service discovery and name resolution. - -{product-title} provides a built-in DNS to ensure that services can be reached by their DNS names. This helps maintain stable communication even if the underlying IP addresses change. When you start a pod, environment variables for service names, IP addresses, and ports are created automatically to enable the pod to communicate with other services. 
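The following command is a minimal sketch of how you might inspect these automatically created variables from inside a pod; the pod name `frontend-1-abcde` and the service name `backend-service` are hypothetical:

[source,terminal]
----
$ oc exec frontend-1-abcde -- env | grep BACKEND_SERVICE
----

For a service named `backend-service`, you can expect variables such as `BACKEND_SERVICE_SERVICE_HOST` and `BACKEND_SERVICE_SERVICE_PORT`, and the same service is also resolvable through the built-in DNS as `backend-service.<namespace>.svc.cluster.local`.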
\ No newline at end of file diff --git a/modules/nw-understanding-networking-exposing-applications.adoc b/modules/nw-understanding-networking-exposing-applications.adoc deleted file mode 100644 index 211dbc0eb5c9..000000000000 --- a/modules/nw-understanding-networking-exposing-applications.adoc +++ /dev/null @@ -1,11 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-exposing-applications_{context}"] -= Exposing applications - -ClusterIP exposes services on an internal IP within the cluster to make the cluster accessible only to other services within the cluster. The NodePort service type exposes the service on a static port on each node's IP. This service type allows external traffic to access the service. Load balancers are typically used in cloud or bare-metal environments that use MetalLB. This service type provisions an external load balancer that routes external traffic to the service. On bare-metal environments, MetalLB uses VIPs and ARP announcements or BGP announcements. - -Ingress is an API object that manages external access to services, such as load balancing, SSL/TLS termination, and name-based virtual hosting. An Ingress Controller, such as NGINX or HAProxy, implements the Ingress API and handles traffic routing based on user-defined rules. \ No newline at end of file diff --git a/modules/nw-understanding-networking-features.adoc b/modules/nw-understanding-networking-features.adoc deleted file mode 100644 index 3009c753c464..000000000000 --- a/modules/nw-understanding-networking-features.adoc +++ /dev/null @@ -1,35 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-features_{context}"] -= Networking features - -{product-title} offers several networking features and enhancements. These features and enhancements are listed as follows: - -* Ingress Operator and Route API: {product-title} includes an Ingress Operator that implements the Ingress Controller API. This component enables external access to cluster services by deploying and managing HAProxy-based Ingress Controllers that support advanced routing configurations and load balancing. {product-title} uses the Route API to translate upstream Ingress objects to route objects. Routes are specific to networking in {product-title}, but you can also use third-party Ingress Controllers. - -* Enhanced security: {product-title} provides advanced network security features, such as the egress firewall and the ingress node firewall. -+ -** Egress firewall: The egress firewall controls and restricts outbound traffic from pods within the cluster. You can set rules to limit which external hosts or IP ranges with which pods can communicate. -** Ingress node firewall: The ingress node firewall is managed by the Ingress Firewall Operator and provides firewall rules at the node level. You can protect your nodes from threats by configuring this firewall on specific nodes within the cluster to filter incoming traffic before it reaches these nodes. -+ -[NOTE] -==== -{product-title} also implements services, such as Network Policy, Admin Network Policy, and Security Context Constraints (SCC) to secure communication between pods and enforce access controls. 
-==== - -* Role-based access control (RBAC): {product-title} extends Kubernetes RBAC to provide more granular control over who can access and manage network resources. RBAC helps maintain security and compliance within the cluster. - -* Multi-tenancy support: {product-title} offers multi-tenancy support to enable multiple users and teams to share the same cluster while keeping their resources isolated and secure. - -* Hybrid and multi-cloud capabilities: {product-title} is designed to work seamlessly across on-premises, cloud, and multi-cloud environments. This flexibility allows organizations to deploy and manage containerized applications across different infrastructures. - -* Observability and monitoring: {product-title} provides integrated observability and monitoring tools that help manage and troubleshoot network issues. These tools include role-based access to network metrics and logs. - -* User-defined networks (UDN): UDNs allow administrators to customize network configurations. UDNs provide enhanced network isolation and IP address management. - -* Egress IP: Egress IP allows you to assign a fixed source IP address for all egress traffic originating from pods within a namespace. Egress IP can improve security and access control by ensuring consistent source IP addresses for external services. For example, if a pod needs to access an external database that only allows traffic from specific IP addresses, you can configure an egress IP for that pod to meet the access requirements. - -* Egress router: An egress router is a pod that acts as a bridge between the cluster and external systems. Egress routers allow traffic from pods to be routed through a specific IP address that is not used for any other purpose. With egress routers, you can enforce access controls or route traffic through a specific gateway. diff --git a/modules/nw-understanding-networking-how-pods-communicate.adoc b/modules/nw-understanding-networking-how-pods-communicate.adoc deleted file mode 100644 index f53f1923be01..000000000000 --- a/modules/nw-understanding-networking-how-pods-communicate.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-how-pods-communicate_{context}"] -= How pods communicate - -Pods use IP addresses to communicate and the Domain Name System (DNS) to discover IP addresses for pods or services. Clusters use various policy types that control what communication is allowed. Pods communicate in two ways: pod-to-pod and service-to-pod. \ No newline at end of file diff --git a/modules/nw-understanding-networking-ingress.adoc b/modules/nw-understanding-networking-ingress.adoc deleted file mode 100644 index e063492f2747..000000000000 --- a/modules/nw-understanding-networking-ingress.adoc +++ /dev/null @@ -1,14 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-ingress_{context}"] -= Ingress - -Ingress is a resource that provides advanced routing capabilities, including load balancing, SSL/TLS termination, and name-based virtual hosting. Here are some key points about Ingress: - -* HTTP/HTTPS routing: You can use Ingress to define rules for routing HTTP and HTTPS traffic to services within the cluster.
-* Load balancing: Ingress Controllers, such as NGINX or HAProxy, manage traffic routing and load balancing based on user-defined defined rules. -* SSL/TLS termination: SSL/TLS termination is the process of decrypting incoming SSL/TLS traffic before passing it to the backend services. -* Multiple domains and paths: Ingress supports routing traffic for multiple domains and paths. \ No newline at end of file diff --git a/modules/nw-understanding-networking-networking-in-OpenShift.adoc b/modules/nw-understanding-networking-networking-in-OpenShift.adoc deleted file mode 100644 index b1ce91277422..000000000000 --- a/modules/nw-understanding-networking-networking-in-OpenShift.adoc +++ /dev/null @@ -1,16 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-networking-in-OpenShift_{context}"] -= Networking in {product-title} - -{product-title} ensures seamless communication between various components within the cluster and between external clients and the cluster. Networking relies on the following core concepts and components: - -* Pod-to-pod communication -* Services -* DNS -* Ingress -* Network controls -* Load balancing \ No newline at end of file diff --git a/modules/nw-understanding-networking-pod-to-pod-example.adoc b/modules/nw-understanding-networking-pod-to-pod-example.adoc deleted file mode 100644 index c6d410a904e2..000000000000 --- a/modules/nw-understanding-networking-pod-to-pod-example.adoc +++ /dev/null @@ -1,32 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-pod-to-pod-example_{context}"] -= Example: Controlling pod-to-pod communication - -In a microservices-based application with multiple pods, a frontend pod needs to communicate with the a backend pod to retrieve data. By using pod-to-pod communication, either directly or through services, these pods can efficiently exchange information. - -To control and secure pod-to-pod communication, you can define network controls. These controls enforce security and compliance requirements by specifying how pods interact with each other based on labels and selectors. - -[source, yaml] ----- -apiVersion: networking.k8s.io/v1 -kind: NetworkPolicy -metadata: - name: allow-some-pods - namespace: default -spec: - podSelector: - matchLabels: - role: app - ingress: - - from: - - podSelector: - matchLabels: - role: backend - ports: - - protocol: TCP - port: 80 ----- diff --git a/modules/nw-understanding-networking-pod-to-pod.adoc b/modules/nw-understanding-networking-pod-to-pod.adoc deleted file mode 100644 index a4bcffecbeb6..000000000000 --- a/modules/nw-understanding-networking-pod-to-pod.adoc +++ /dev/null @@ -1,11 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-pod-to-pod_{context}"] -= Pod-to-pod communication - -Pod-to-pod communication is the ability of pods to communicate with each other within the cluster. This is crucial for the functioning of microservices and distributed applications. - -Each pod in a cluster is assigned a unique IP address that they use to communicate directly with other pods. Pod-to-pod communication is useful for intra-cluster communication where pods need to exchange data or perform tasks collaboratively. 
For example, Pod A can send requests directly to Pod B using Pod B's IP address. Pods can communicate over a flat network without Network Address Translation (NAT). This allows for seamless communication between pods across different nodes. \ No newline at end of file diff --git a/modules/nw-understanding-networking-routes.adoc b/modules/nw-understanding-networking-routes.adoc deleted file mode 100644 index 0e5b0299a944..000000000000 --- a/modules/nw-understanding-networking-routes.adoc +++ /dev/null @@ -1,12 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-routes_{context}"] -= Routes - -Routes are specific to {product-title} resources that expose a service at a host name so that external clients can reach the service by name. - -Routes map a host name to a service. Route name mapping allows external clients to access the service using the host name. -Routes provide load balancing for the traffic directed to the service. The host name used in a route is resolved to the IP address of the router. Routes then forward the traffic to the appropriate service. Routes can also be secured using SSL/TLS to encrypt traffic between the client and the service. \ No newline at end of file diff --git a/modules/nw-understanding-networking-securing-connections.adoc b/modules/nw-understanding-networking-securing-connections.adoc deleted file mode 100644 index 0aec88276ecb..000000000000 --- a/modules/nw-understanding-networking-securing-connections.adoc +++ /dev/null @@ -1,17 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-securing-connections_{context}"] -= Securing connections - -Ingress Controllers manage SSL/TLS termination to decrypt incoming SSL/TLS traffic before passing it to the backend services. SSL/TLS termination offloads the encryption/decryption process from the application pods. You can use TLS certificates to encrypt traffic between clients and your services. You can manage certificates with tools, such as `cert-manager`, to automate certificate distribution and renewal. - -Routes pass TLS traffic to a pod if it has the SNI field. This process allows services that run TCP to be exposed using TLS and not only HTTP/HTTPS. A site administrator can manage the certificates centrally and allow application developers to read private keys even without permission. - -The Route API enables encryption of router-to-pod traffic with cluster-managed certificates. This ensures external certificates are centrally managed while the internal leg remains encrypted. Application developers receive unique private keys for their applications. These keys can be mounted as a secret in the pod. - -Network controls define rules for how pods can communicate with each other and other network endpoints. This enhances security by controlling traffic flow within the cluster. These controls are implemented at the network plugin level to ensure that only allowed traffic flows between pods. - -Role-based access control (RBAC) manages permissions and control who can access resources within the cluster. Service accounts provide identity for pods that access the API. RBAC allows granular control over what each pod can do. 
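The RBAC model described above can be sketched with a namespaced role and binding. The following example is illustrative only: the `service-reader` role, the `frontend-sa` service account, and the `default` namespace are assumptions rather than names used elsewhere in this documentation.

[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: service-reader # illustrative name
  namespace: default
rules:
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: frontend-service-reader
  namespace: default
subjects:
- kind: ServiceAccount
  name: frontend-sa # hypothetical service account that the application pods run as
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: service-reader
----

A pod that runs as the `frontend-sa` service account can then list services and endpoints in its own namespace, but nothing more, which keeps API access as narrow as the application requires.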
diff --git a/modules/nw-understanding-networking-security-example.adoc b/modules/nw-understanding-networking-security-example.adoc deleted file mode 100644 index ef80c695aae4..000000000000 --- a/modules/nw-understanding-networking-security-example.adoc +++ /dev/null @@ -1,78 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-understanding-networking-security-example_{context}"] -= Example: Exposing applications and securing connections - -In this example, a web application running in your cluster needs to be accessed by external users. - -. Create a service and expose the application as a service using a service type that suits your needs. -+ -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: my-web-app -spec: - type: LoadBalancer - selector: - app: my-web-app - ports: - - port: 80 - targetPort: 8080 ----- - -. Define an `Ingress` resource to manage HTTP/HTTPS traffic and route it to your service. -+ -[source,yaml] ----- -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: my-web-app-ingress - annotations: - kubernetes.io/ingress.class: "nginx" -spec: - rules: - - host: mywebapp.example.com - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: my-web-app - port: - number: 80 ----- - -. Configure TLS for your ingress to ensure secured, encrypted connections. -+ -[source,yaml] ----- -apiVersion: networking.k8s.io/v1 -kind: Ingress -metadata: - name: my-web-app-ingress - annotations: - kubernetes.io/ingress.class: "nginx" -spec: - tls: - - hosts: - - mywebapp.example.com - secretName: my-tls-secret - rules: - - host: mywebapp.example.com - http: - paths: - - path: / - pathType: Prefix - backend: - service: - name: my-web-app - port: - number: 80 ----- \ No newline at end of file diff --git a/modules/nw-understanding-networking-security.adoc b/modules/nw-understanding-networking-security.adoc deleted file mode 100644 index 874e3d02785e..000000000000 --- a/modules/nw-understanding-networking-security.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-security_{context}"] -= Security and traffic management - -Administrators can expose applications to external traffic and secure network connections using service types, such as `ClusterIP`, `NodePort`, and `LoadBalaner` and API resources such as `Ingress` and `Route`. The Ingress Operator and Cluster Network Operator (CNO) help configure and manage these services and resources. The Ingress Operator deploys and manages one or more Ingress Controllers. These controllers route external HTTP and HTTPS traffic to services within the cluster. A CNO deploys and manages the cluster network components, including pod networks, service networks, and DNS. 
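A `Route` is the {product-title}-specific counterpart to the `Ingress` resource used in the earlier example. As a minimal sketch, the `my-web-app` service from that example could also be exposed with a route; this sketch assumes that edge TLS termination with the default ingress certificate is acceptable, so add certificate fields to the `tls` stanza if it is not.

[source,yaml]
----
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-web-app
spec:
  host: mywebapp.example.com
  to:
    kind: Service
    name: my-web-app
  port:
    targetPort: 8080
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
----

With `insecureEdgeTerminationPolicy: Redirect`, plain HTTP requests from external clients are redirected to HTTPS rather than refused.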
\ No newline at end of file diff --git a/modules/nw-understanding-networking-service-to-pod-example.adoc b/modules/nw-understanding-networking-service-to-pod-example.adoc deleted file mode 100644 index af57d39699d9..000000000000 --- a/modules/nw-understanding-networking-service-to-pod-example.adoc +++ /dev/null @@ -1,52 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-understanding-networking-service-to-pod-example_{context}"] -= Example: Controlling service-to-pod communication - -A cluster is running a microservices-based application with two components: a front-end and a backend. The front-end needs to communicate with the backend to fetch data. - -.Procedure - -. Create a backend service. -+ -[source, yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: backend -spec: - selector: - app: backend - ports: - - protocol: TCP - port: 80 - targetPort: 8080 ----- - -. Configure backend pods. -+ -[source, yaml] ----- -apiVersion: v1 -kind: Pod -metadata: - name: backend-pod - labels: - app: backend -spec: - containers: - - name: backend-container - image: my-backend-image - ports: - - containerPort: 8080 ----- - -. Establish front-end communication. -+ -The front-end pods can now use the DNS name `backend.default.svc.cluster.local` to communicate with the backend service. The service ensures that the traffic is routed to one of the backend pods. - -Service-to-pod communication abstracts the complexity of managing pod IPs and ensures reliable and efficient communication within the cluster. diff --git a/modules/nw-understanding-networking-service-to-pod.adoc b/modules/nw-understanding-networking-service-to-pod.adoc deleted file mode 100644 index 016ea04e570e..000000000000 --- a/modules/nw-understanding-networking-service-to-pod.adoc +++ /dev/null @@ -1,56 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-service-to-pod_{context}"] -= Service-to-pod communication - -Service-to-pod communication ensures that services can reliably route traffic to the appropriate pods. Services are objects that define a logical set of pods and provide a stable endpoint, such as IP addresses and DNS names. Pod IP addresses can change. Services abstract pod IP addresses to provide a consistent way to access the application components even as IP addresses change. - -Key concepts of service-to-pod communication include: - -* Endpoints: Endpoints define the IP addresses and ports of the pods that are associated with a service. - -* Selectors: Selectors use labels, such as key-value pairs, to define the criteria for selecting a set of objects that a service should target. - -* Services: Services provide a stable IP address and DNS name for a set of pods. This abstraction allows other components to communicate with the service rather than individual pods. - -* Service discovery: DNS makes services discoverable. When a service is created, it is assigned a DNS name. Other pods discover this DNS name and use it to communicate with the service. - -* Service Types: Service types control how services are exposed within or outside the cluster. - -** ClusterIP exposes the service on an internal cluster IP. It is the default service type and makes the service only reachable from within the cluster. 
- -** NodePort allows external traffic to access the service by exposing the service on each node's IP at a static port. - -** LoadBalancer uses a cloud provider's load balancer to expose the service externally. - -Services use selectors to identify the pods that should receive the traffic. The selectors match labels on the pods to determine which pods are part of the service. Example: A service with the selector `app: myapp` will route traffic to all pods with the label `app: myapp`. - -Endpoints are dynamically updated to reflect the current IP addresses of the pods that match the service selector. {product-title} maintains these endpoints and ensures that the service routes traffic to the correct pods. - -The communication flow refers to the sequence of steps and interactions that occur when a service in Kubernetes routes traffic to the appropriate pods. The typical communication flow for service-to-pod communication is as follows: - -* Service creation: When you create a service, you define the service type, the port on which the service listens, and the selector labels. -+ -[source, yaml] ----- - apiVersion: v1 - kind: Service - metadata: - name: my-service - spec: - selector: - app: myapp - ports: - - protocol: TCP - port: 80 - targetPort: 8080 ----- - -* DNS resolution: Each pod has a DNS name that other pods can use to communicate with the service. For example, if the service is named `my-service` in the `my-app` namespace, its DNS name is `my-service.my-app.svc.cluster.local`. - -* Traffic routing: When a pod sends a request to the service’s DNS name, {product-title} resolves the name to the service’s ClusterIP. The service then uses the endpoints to route the traffic to one of the pods that match its selector. - -* Load balancing: Services also provide basic load balancing. They distribute incoming traffic across all the pods that match the selector. This ensures that no single pod is overwhelmed with too much traffic. diff --git a/modules/nw-understanding-networking-what-is-a-client.adoc b/modules/nw-understanding-networking-what-is-a-client.adoc deleted file mode 100644 index 99789245a3f7..000000000000 --- a/modules/nw-understanding-networking-what-is-a-client.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-what-is-a-client_{context}"] -= Understanding external clients - -An external client is any entity outside the cluster that interacts with the services and applications running within the cluster. External can include end users, external services, and external devices. End users are people who access a web application hosted in the cluster through their browsers or mobile devices. External services are other software systems or applications that interact with the services in the cluster, often through APIs. External devices are any hardware outside the cluster network that needs to communicate with the cluster services, such as the Internet of Things (IoT) devices. 
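When network controls restrict ingress traffic, external clients can reach an application only if traffic from the cluster ingress router is explicitly allowed. The following is a minimal sketch of such a policy. It assumes the default router runs in the `openshift-ingress` namespace and relies on the automatically applied `kubernetes.io/metadata.name` namespace label, and the `app: my-web-app` pod label is a placeholder; verify all three for your environment.

[source,yaml]
----
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-router # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-web-app # placeholder label for the exposed application pods
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-ingress
    ports:
    - protocol: TCP
      port: 8080
----

Traffic from end users, external services, and external devices still enters through the router; the policy only ensures that the router, and nothing else, can reach the application pods directly.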
\ No newline at end of file diff --git a/modules/nw-understanding-networking-what-is-a-cluster.adoc b/modules/nw-understanding-networking-what-is-a-cluster.adoc deleted file mode 100644 index 3da3dd10545a..000000000000 --- a/modules/nw-understanding-networking-what-is-a-cluster.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-what-is-a-cluster_{context}"] -= Understanding clusters - -A cluster is a collection of nodes that work together to run containerized applications. These nodes include control plane nodes and compute nodes. \ No newline at end of file diff --git a/modules/nw-understanding-networking-what-is-a-node.adoc b/modules/nw-understanding-networking-what-is-a-node.adoc deleted file mode 100644 index 6aeefd91cd25..000000000000 --- a/modules/nw-understanding-networking-what-is-a-node.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-understanding-networking-what-is-a-node_{context}"] -= What is a node? - -Nodes are the physical or virtual machines that run containerized applications. Nodes host the pods and provide resources, such as memory and storage for running the applications. Nodes enable communication between pods. Each pod is assigned an IP address. Pods within the same node can communicate with each other using these IP addresses. Nodes facilitate service discovery by allowing pods to discover and communicate with services within the cluster. Nodes help distribute network traffic among pods to ensure efficient load balancing and high availability of applications. Nodes provide a bridge between the internal cluster network and external networks to allowing external clients to access services running on the cluster. \ No newline at end of file diff --git a/modules/obtaining-value-cluster-id.adoc b/modules/obtaining-value-cluster-id.adoc deleted file mode 100644 index 649f333a5acd..000000000000 --- a/modules/obtaining-value-cluster-id.adoc +++ /dev/null @@ -1,27 +0,0 @@ -// Module included in the following assemblies: -// -// - -:_mod-docs-content-type: PROCEDURE -[id="obtaining-value-cluster-id_{context}"] -= Obtaining the cluster ID value - -You can find the cluster ID value by using the {oc-first}. - -.Prerequisites - -* You have deployed an {product-title} cluster. - -* You have access to the cluster using an account with `cluster-admin` permissions. - -* You have installed the {oc-first}. - -.Procedure - -* Obtain the value of the cluster ID by running the following command: -+ -[source,terminal] ----- -$ oc get infrastructure cluster \ - -o jsonpath='{.status.infrastructureName}' ----- \ No newline at end of file diff --git a/modules/patching-ovnk-address-ranges.adoc b/modules/patching-ovnk-address-ranges.adoc deleted file mode 100644 index 6b105cebd425..000000000000 --- a/modules/patching-ovnk-address-ranges.adoc +++ /dev/null @@ -1,43 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc - -:_mod-docs-content-type: PROCEDURE -[id="patching-ovnk-address-ranges_{context}"] -= Patching OVN-Kubernetes address ranges - -OVN-Kubernetes reserves the following IP address ranges: - -* `100.64.0.0/16`. This IP address range is used for the `internalJoinSubnet` parameter of OVN-Kubernetes by default. 
-* `100.88.0.0/16`. This IP address range is used for the `internalTransSwitchSubnet` parameter of OVN-Kubernetes by default. - -If these IP addresses have been used by OpenShift SDN or any external networks that might communicate with this cluster, you must patch them to use a different IP address range before initiating the limited live migration. - -The following procedure can be used to patch CIDR ranges that are in use by OpenShift SDN if the migration was initially blocked. - -[NOTE] -==== -Only use this optional procedure if your cluster or network environment overlaps with the `100.64.0.0/16` subnet or the `100.88.0.0/16` subnet. Ensure that you run the steps in the procedure before you start the limited live migration operation. -==== - -.Prerequisites - -* You have access to the cluster as a user with the `cluster-admin` role. - -.Procedure - -. If the `100.64.0.0/16` IP address range is already in use, enter the following command to patch it to a different range. The following example uses `100.63.0.0/16`. -+ -[source,terminal] ----- -$ oc patch network.operator.openshift.io cluster --type='merge' -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipv4":{"internalJoinSubnet": "100.63.0.0/16"}}}}}' ----- - -. If the `100.88.0.0/16` IP address range is already in use, enter the following command to patch it to a different range. The following example uses `100.99.0.0/16`. -+ -[source,terminal] ----- -$ oc patch network.operator.openshift.io cluster --type='merge' -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipv4":{"internalTransitSwitchSubnet": "100.99.0.0/16"}}}}}' ----- - -After patching the `100.64.0.0/16` and `100.88.0.0/16` IP address ranges, you can initiate the limited live migration. \ No newline at end of file diff --git a/modules/persistent-storage-csi-efs-create-volume.adoc b/modules/persistent-storage-csi-efs-create-volume.adoc deleted file mode 100644 index fc80837128a5..000000000000 --- a/modules/persistent-storage-csi-efs-create-volume.adoc +++ /dev/null @@ -1,53 +0,0 @@ -// Module included in the following assemblies: -// -// * storage/persistent_storage/persistent-storage-csi-aws-efs.adoc -// * storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc -// * storage/container_storage_interface/osd-persistent-storage-aws-efs-csi.adoc - -:_mod-docs-content-type: PROCEDURE -[id="efs-create-volume_{context}"] -= Creating and configuring access to EFS volumes in AWS - -This procedure explains how to create and configure EFS volumes in AWS so that you can use them in {product-title}. - -.Prerequisites - -* AWS account credentials - -.Procedure - -To create and configure access to an EFS volume in AWS: - -. On the AWS console, open https://console.aws.amazon.com/efs. - -. Click *Create file system*: -+ -* Enter a name for the file system. - -* For *Virtual Private Cloud (VPC)*, select the virtual private cloud (VPC) for your {product-title} cluster. - -* Accept default settings for all other selections. - -. Wait for the volume and mount targets to finish being fully created: - -.. Go to https://console.aws.amazon.com/efs#/file-systems. - -.. Click your volume, and on the *Network* tab wait for all mount targets to become available (~1-2 minutes). - -. On the *Network* tab, copy the Security Group ID (you will need this in the next step). - -. Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups, and find the Security Group used by the EFS volume. - -. 
On the *Inbound rules* tab, click *Edit inbound rules*, and then add a new rule with the following settings to allow {product-title} nodes to access EFS volumes : -+ -* *Type*: NFS - -* *Protocol*: TCP - -* *Port range*: 2049 - -* *Source*: Custom/IP address range of your nodes (for example: “10.0.0.0/16”) -+ -This step allows {product-title} to use NFS ports from the cluster. - -. Save the rule. diff --git a/modules/preferred-ibm-z-system-requirements.adoc b/modules/preferred-ibm-z-system-requirements.adoc deleted file mode 100644 index 0600387afbb5..000000000000 --- a/modules/preferred-ibm-z-system-requirements.adoc +++ /dev/null @@ -1,104 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_ibm_z/installing-ibm-z.adoc -// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc -// * installing/installing_ibm_z/installing-ibm-z-lpar.adoc -// * installing/installing_ibm_z/installing-restricted-networks-ibm-z-lpar.adoc - -ifeval::["{context}" == "installing-ibm-z"] -:ibm-z: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z"] -:ibm-z: -endif::[] -ifeval::["{context}" == "installing-ibm-z-lpar"] -:ibm-z-lpar: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z-lpar"] -:ibm-z-lpar: -endif::[] - -:_mod-docs-content-type: CONCEPT -[id="preferred-ibm-z-system-requirements_{context}"] -= Preferred {ibm-z-title} system environment - - -== Hardware requirements - -* Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. -* Two network connections to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster. -ifdef::ibm-z[] -* HiperSockets that are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a {op-system-base} 8 guest to bridge to the HiperSockets network. -endif::ibm-z[] -ifdef::ibm-z-lpar[] -* HiperSockets that are attached to a node directly as a device. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a {op-system-base} 8 guest to bridge to the HiperSockets network. -endif::ibm-z-lpar[] - - -== Operating system requirements - -ifdef::ibm-z[] -* Two or three instances of z/VM 7.2 or later for high availability - -On your z/VM instances, set up: - -* Three guest virtual machines for {product-title} control plane machines, one per z/VM instance. -* At least six guest virtual machines for {product-title} compute machines, distributed across the z/VM instances. -* One guest virtual machine for the temporary {product-title} bootstrap machine. -* To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command `SET SHARE`. Do the same for infrastructure nodes, if they exist. See link:https://www.ibm.com/docs/en/zvm/latest?topic=commands-set-share[SET SHARE] in {ibm-name} Documentation. -endif::ibm-z[] -ifdef::ibm-z-lpar[] -* Three LPARs for {product-title} control plane machines. -* At least six LPARs for {product-title} compute machines. -* One machine or LPAR for the temporary {product-title} bootstrap machine. -endif::ibm-z-lpar[] - - -== {ibm-z-title} network connectivity requirements - -ifdef::ibm-z[] -To install on {ibm-z-name} under z/VM, you require a single z/VM virtual NIC in layer 2 mode. 
You also need: - -* A direct-attached OSA or RoCE network adapter -* A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. -endif::ibm-z[] -ifdef::ibm-z-lpar[] -To install on {ibm-z-name} in an LPAR, you need: - -* A direct-attached OSA or RoCE network adapter -* For a preferred setup, use OSA link aggregation. -endif::ibm-z-lpar[] - - -=== Disk storage - -ifdef::ibm-z[] -* FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. -* FCP attached disk storage -endif::ibm-z[] -ifdef::ibm-z-lpar[] -* FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. -* FCP attached disk storage -* NVMe disk storage -endif::ibm-z-lpar[] - - -=== Storage / Main Memory - -* 16 GB for {product-title} control plane machines -* 8 GB for {product-title} compute machines -* 16 GB for the temporary {product-title} bootstrap machine - -ifeval::["{context}" == "installing-ibm-z"] -:!ibm-z: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z"] -:!ibm-z: -endif::[] -ifeval::["{context}" == "installing-ibm-z-lpar"] -:!ibm-z-lpar: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z"] -:!ibm-z-lpar: -endif::[] - diff --git a/modules/providing-direct-documentation-feedback.adoc b/modules/providing-direct-documentation-feedback.adoc deleted file mode 100644 index b4bd0eab4840..000000000000 --- a/modules/providing-direct-documentation-feedback.adoc +++ /dev/null @@ -1,24 +0,0 @@ -:_module-type: CONCEPT - -[id="providing-direct-documentation-feedback_{context}"] -= Providing feedback on Red Hat documentation - -[role="_abstract"] -We appreciate your feedback on our technical content and encourage you to tell us what you think. -If you'd like to add comments, provide insights, correct a typo, or even ask a question, you can do so directly in the documentation. - -[NOTE] -==== -You must have a Red Hat account and be logged in to the customer portal. -==== - -To submit documentation feedback from the customer portal, do the following: - -. Select the *Multi-page HTML* format. -. Click the *Feedback* button at the top-right of the document. -. Highlight the section of text where you want to provide feedback. -. Click the *Add Feedback* dialog next to your highlighted text. -. Enter your feedback in the text box on the right of the page and then click *Submit*. - -We automatically create a tracking issue each time you submit feedback. -Open the link that is displayed after you click *Submit* and start watching the issue or add more comments. 
diff --git a/modules/rhel-adding-more-nodes.adoc b/modules/rhel-adding-more-nodes.adoc deleted file mode 100644 index b0850ae47a19..000000000000 --- a/modules/rhel-adding-more-nodes.adoc +++ /dev/null @@ -1,62 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/more-rhel-compute.adoc - -:_mod-docs-content-type: PROCEDURE -[id="rhel-adding-more-nodes_{context}"] -= Adding more RHEL compute machines to your cluster - -You can add more compute machines that use Red Hat Enterprise Linux (RHEL) as the operating system to an {product-title} {product-version} cluster. - -.Prerequisites - -* Your {product-title} cluster already contains RHEL compute nodes. -* The `hosts` file that you used to add the first RHEL compute machines to your cluster is on the machine that you use the run the playbook. -* The machine that you run the playbook on must be able to access all of the RHEL hosts. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. -* The `kubeconfig` file for the cluster and the installation program that you used to install the cluster are on the machine that you use the run the playbook. -* You must prepare the RHEL hosts for installation. -* Configure a user on the machine that you run the playbook on that has SSH access to all of the RHEL hosts. -* If you use SSH key-based authentication, you must manage the key with an SSH agent. -* Install the OpenShift CLI (`oc`) on the machine that you run the playbook on. - - -.Procedure - -. Open the Ansible inventory file at `//inventory/hosts` that defines your compute machine hosts and required variables. - -. Rename the `[new_workers]` section of the file to `[workers]`. - -. Add a `[new_workers]` section to the file and define the fully-qualified domain names for each new host. The file resembles the following example: -+ ----- -[all:vars] -ansible_user=root -#ansible_become=True - -openshift_kubeconfig_path="~/.kube/config" - -[workers] -mycluster-rhel8-0.example.com -mycluster-rhel8-1.example.com - -[new_workers] -mycluster-rhel8-2.example.com -mycluster-rhel8-3.example.com ----- -+ -In this example, the `mycluster-rhel8-0.example.com` and `mycluster-rhel8-1.example.com` machines are in the cluster and you add the `mycluster-rhel8-2.example.com` and `mycluster-rhel8-3.example.com` machines. - -. Navigate to the Ansible playbook directory: -+ -[source,terminal] ----- -$ cd /usr/share/ansible/openshift-ansible ----- - -. Run the scaleup playbook: -+ -[source,terminal] ----- -$ ansible-playbook -i //inventory/hosts playbooks/scaleup.yml <1> ----- -<1> For ``, specify the path to the Ansible inventory file that you created. diff --git a/modules/rhel-adding-node.adoc b/modules/rhel-adding-node.adoc deleted file mode 100644 index 6a5c6fcdc33c..000000000000 --- a/modules/rhel-adding-node.adoc +++ /dev/null @@ -1,52 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/adding-rhel-compute.adoc -// * post_installation_configuration/node-tasks.adoc - -:_mod-docs-content-type: PROCEDURE -[id="rhel-adding-node_{context}"] -= Adding a RHEL compute machine to your cluster - -You can add compute machines that use Red Hat Enterprise Linux as the operating system to an {product-title} {product-version} cluster. - -.Prerequisites - -* You installed the required packages and performed the necessary configuration on the machine that you run the playbook on. -* You prepared the RHEL hosts for installation. 
- -.Procedure - -Perform the following steps on the machine that you prepared to run the playbook: - -. Create an Ansible inventory file that is named `//inventory/hosts` that defines your compute machine hosts and required variables: -+ ----- -[all:vars] -ansible_user=root <1> -#ansible_become=True <2> - -openshift_kubeconfig_path="~/.kube/config" <3> - -[new_workers] <4> -mycluster-rhel8-0.example.com -mycluster-rhel8-1.example.com ----- -<1> Specify the user name that runs the Ansible tasks on the remote compute machines. -<2> If you do not specify `root` for the `ansible_user`, you must set `ansible_become` to `True` and assign the user sudo permissions. -<3> Specify the path and file name of the `kubeconfig` file for your cluster. -<4> List each RHEL machine to add to your cluster. You must provide the fully-qualified domain name for each host. This name is the hostname that the cluster uses to access the machine, so set the correct public or private name to access the machine. - -. Navigate to the Ansible playbook directory: -+ -[source,terminal] ----- -$ cd /usr/share/ansible/openshift-ansible ----- - -. Run the playbook: -+ -[source,terminal] ----- -$ ansible-playbook -i //inventory/hosts playbooks/scaleup.yml <1> ----- -<1> For ``, specify the path to the Ansible inventory file that you created. diff --git a/modules/rhel-ansible-parameters.adoc b/modules/rhel-ansible-parameters.adoc deleted file mode 100644 index 802ca2aa6c38..000000000000 --- a/modules/rhel-ansible-parameters.adoc +++ /dev/null @@ -1,28 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/adding-rhel-compute.adoc -// * machine_management/more-rhel-compute.adoc -// * post_installation_configuration/node-tasks.adoc - -[id="rhel-ansible-parameters_{context}"] -= Required parameters for the Ansible hosts file - -You must define the following parameters in the Ansible hosts file before you add Red Hat Enterprise Linux (RHEL) compute machines to your cluster. - -[cols="1,2,2",options="header"] -|=== -|Parameter |Description |Values - -|`ansible_user` -|The SSH user that allows SSH-based authentication without requiring a password. If you use SSH key-based authentication, then you must manage the key with an SSH agent. -|A user name on the system. The default value is `root`. - -|`ansible_become` -|If the values of `ansible_user` is not root, you must set `ansible_become` to `True`, and the user that you specify as the `ansible_user` must be configured for passwordless sudo access. -|`True`. If the value is not `True`, do not specify and define this parameter. - -|`openshift_kubeconfig_path` -|Specifies a path and file name to a local directory that contains the `kubeconfig` file for your cluster. -|The path and name of the configuration file. - -|=== diff --git a/modules/rhel-attaching-instance-aws.adoc b/modules/rhel-attaching-instance-aws.adoc deleted file mode 100644 index 24f2394123d1..000000000000 --- a/modules/rhel-attaching-instance-aws.adoc +++ /dev/null @@ -1,15 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/adding-rhel-compute.adoc -// * machine_management/more-rhel-compute.adoc - - -:_mod-docs-content-type: PROCEDURE -[id="rhel-attaching-instance-aws_{context}"] -= Attaching the role permissions to {op-system-base} instance in AWS - -Using the Amazon IAM console in your browser, you may select the needed roles and assign them to a worker node. - -.Procedure -. 
From the AWS IAM console, create your link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#create-iam-role[desired IAM role]. -. link:https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html#attach-iam-role[Attach the IAM role] to the desired worker node. diff --git a/modules/rhel-compute-about-hooks.adoc b/modules/rhel-compute-about-hooks.adoc deleted file mode 100644 index 2290996138f0..000000000000 --- a/modules/rhel-compute-about-hooks.adoc +++ /dev/null @@ -1,25 +0,0 @@ -// Module included in the following assemblies: -// -// * updating/updating_a_cluster/updating-cluster-rhel-compute.adoc - -:_mod-docs-content-type: CONCEPT -[id="rhel-compute-about-hooks_{context}"] -= About Ansible hooks for updates - -When you update {product-title}, you can run custom tasks on your Red Hat -Enterprise Linux (RHEL) nodes during specific operations by using _hooks_. Hooks -allow you to provide files that define tasks to run before or after specific -update tasks. You can use hooks to validate or modify custom -infrastructure when you update the RHEL compute nodes in you {product-title} -cluster. - -Because when a hook fails, the operation fails, you must design hooks that are -idempotent, or can run multiple times and provide the same results. - -Hooks have the following important limitations: -- Hooks do not have a defined or versioned interface. They can use internal -`openshift-ansible` variables, but it is possible that the variables will be -modified or removed in future {product-title} releases. -- Hooks do not have error handling, so an error in a hook halts the update -process. If you get an error, you must address the problem and then start the -update again. \ No newline at end of file diff --git a/modules/rhel-compute-available-hooks.adoc b/modules/rhel-compute-available-hooks.adoc deleted file mode 100644 index 8ca51b307ffc..000000000000 --- a/modules/rhel-compute-available-hooks.adoc +++ /dev/null @@ -1,42 +0,0 @@ -// Module included in the following assemblies: -// -// * updating/updating_a_cluster/updating-cluster-rhel-compute.adoc - -[id="rhel-compute-available-hooks_{context}"] -= Available hooks for RHEL compute machines - -You can use the following hooks when you update the Red Hat Enterprise Linux (RHEL) -compute machines in your {product-title} cluster. - - -[cols="1,1",options="header"] -|=== -|Hook name |Description - - -|`openshift_node_pre_cordon_hook` -a|- Runs *before* each node is cordoned. -- This hook runs against *each node* in serial. -- If a task must run against a different host, the task must use -link:https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html[`delegate_to` or `local_action`]. - -|`openshift_node_pre_upgrade_hook` -a|- Runs *after* each node is cordoned but *before* it is updated. -- This hook runs against *each node* in serial. -- If a task must run against a different host, the task must use -link:https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html[`delegate_to` or `local_action`]. - -|`openshift_node_pre_uncordon_hook` -a|- Runs *after* each node is updated but *before* it is uncordoned. -- This hook runs against *each node* in serial. -- If a task must run against a different host, they task must use -link:https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html[`delegate_to` or `local_action`]. - -|`openshift_node_post_upgrade_hook` -a|- Runs *after* each node uncordoned. It is the *last* node update action. 
-- This hook runs against *each node* in serial. -- If a task must run against a different host, the task must use -link:https://docs.ansible.com/ansible/latest/user_guide/playbooks_delegation.html[`delegate_to` or `local_action`]. - -|=== - diff --git a/modules/rhel-compute-overview.adoc b/modules/rhel-compute-overview.adoc deleted file mode 100644 index 29683d11d5b2..000000000000 --- a/modules/rhel-compute-overview.adoc +++ /dev/null @@ -1,22 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/adding-rhel-compute.adoc -// * machine_management/more-rhel-compute.adoc -// * post_installation_configuration/node-tasks.adoc - -:_mod-docs-content-type: CONCEPT -[id="rhel-compute-overview_{context}"] -= About adding RHEL compute nodes to a cluster - -In {product-title} {product-version}, you have the option of using {op-system-base-full} machines as compute machines in your cluster if you use a user-provisioned or installer-provisioned infrastructure installation on the `x86_64` architecture. You must use {op-system-first} machines for the control plane machines in your cluster. - -If you choose to use {op-system-base} compute machines in your cluster, you are responsible for all operating system life cycle management and maintenance. You must perform system updates, apply patches, and complete all other required tasks. - -For installer-provisioned infrastructure clusters, you must manually add {op-system-base} compute machines because automatic scaling in installer-provisioned infrastructure clusters adds Red Hat Enterprise Linux CoreOS (RHCOS) compute machines by default. - -[IMPORTANT] -==== -* Because removing {product-title} from a machine in the cluster requires destroying the operating system, you must use dedicated hardware for any {op-system-base} machines that you add to the cluster. - -* Swap memory is disabled on all {op-system-base} machines that you add to your {product-title} cluster. You cannot enable swap memory on these machines. -==== \ No newline at end of file diff --git a/modules/rhel-compute-requirements.adoc b/modules/rhel-compute-requirements.adoc deleted file mode 100644 index 976e56716a42..000000000000 --- a/modules/rhel-compute-requirements.adoc +++ /dev/null @@ -1,51 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/adding-rhel-compute.adoc -// * machine_management/more-rhel-compute.adoc -// * post_installation_configuration/node-tasks.adoc - - -[id="rhel-compute-requirements_{context}"] -= System requirements for {op-system-base} compute nodes - -The {op-system-base-full} compute machine hosts in your {product-title} environment must meet the following minimum hardware specifications and system-level requirements: - -* You must have an active {product-title} subscription on your Red Hat account. If you do not, contact your sales representative for more information. - -* Production environments must provide compute machines to support your expected workloads. As a cluster administrator, you must calculate the expected workload and add about 10% for overhead. For production environments, allocate enough resources so that a node host failure does not affect your maximum capacity. -* Each system must meet the following hardware requirements: -** Physical or virtual system, or an instance running on a public or private IaaS. -ifdef::openshift-origin[] -** Base OS: CentOS 7.4. 
-endif::[] -ifdef::openshift-enterprise,openshift-webscale[] -** Base operating system: Use link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_a_standard_rhel_8_installation/index[{op-system-base} 8.8 or a later version] with the minimal installation option. -+ -[IMPORTANT] -==== -Adding {op-system-base} 7 compute machines to an {product-title} cluster is not supported. - -If you have {op-system-base} 7 compute machines that were previously supported in a past {product-title} version, you cannot upgrade them to {op-system-base} 8. You must deploy new {op-system-base} 8 hosts, and the old {op-system-base} 7 hosts should be removed. See the "Deleting nodes" section for more information. - -For the most recent list of major functionality that has been deprecated or removed within {product-title}, refer to the _Deprecated and removed features_ section of the {product-title} release notes. -==== -** If you deployed {product-title} in FIPS mode, you must enable FIPS on the {op-system-base} machine before you boot it. See link:https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/security_hardening/assembly_installing-a-rhel-8-system-with-fips-mode-enabled_security-hardening[Installing a RHEL 8 system with FIPS mode enabled] in the {op-system-base} 8 documentation. -+ --- -include::snippets/fips-snippet.adoc[] --- - -endif::[] -** NetworkManager 1.0 or later. -** 1 vCPU. -** Minimum 8 GB RAM. -** Minimum 15 GB hard disk space for the file system containing `/var/`. -** Minimum 1 GB hard disk space for the file system containing `/usr/local/bin/`. -** Minimum 1 GB hard disk space for the file system containing its temporary directory. The temporary system directory is determined according to the rules defined in the tempfile module in the Python standard library. -* Each system must meet any additional requirements for your system provider. For example, if you installed your cluster on VMware vSphere, your disks must be configured according to its link:https://github.com/vmware-archive/vsphere-storage-for-kubernetes/blob/master/documentation/prerequisites.md[storage guidelines] and the `disk.enableUUID=TRUE` attribute must be set. - -* Each system must be able to access the cluster's API endpoints by using DNS-resolvable hostnames. Any network security access control that is in place must allow system access to the cluster's API service endpoints. - -* For clusters installed on {azure-first}: -** Ensure the system includes the hardware requirement of a `Standard_D8s_v3` virtual machine. -** Enable Accelerated Networking. Accelerated Networking uses single root I/O virtualization (SR-IOV) to provide {azure-full} VMs with a more direct path to the switch. diff --git a/modules/rhel-compute-updating.adoc b/modules/rhel-compute-updating.adoc deleted file mode 100644 index 32e210d8fe33..000000000000 --- a/modules/rhel-compute-updating.adoc +++ /dev/null @@ -1,147 +0,0 @@ -// Module included in the following assemblies: -// -// * updating/updating_a_cluster/updating-cluster-rhel-compute.adoc - -:_mod-docs-content-type: PROCEDURE -[id="rhel-compute-updating-minor_{context}"] -= Updating {op-system-base} compute machines in your cluster - -After you update your cluster, you must update the {op-system-base-full} compute machines in your cluster. - -[IMPORTANT] -==== -{op-system-base-full} versions 8.6 and later are supported for {op-system-base} compute machines. 
-==== - -You can also update your compute machines to another minor version of {product-title} if you are using {op-system-base} as the operating system. You do not need to exclude any RPM packages from {op-system-base} when performing a minor version update. - -[IMPORTANT] -==== -You cannot update {op-system-base} 7 compute machines to {op-system-base} 8. You must deploy new {op-system-base} 8 hosts, and the old {op-system-base} 7 hosts should be removed. -==== - -.Prerequisites - -* You updated your cluster. -+ -[IMPORTANT] -==== -Because the {op-system-base} machines require assets that are generated by the cluster to complete the update process, you must update the cluster before you update the {op-system-base} worker machines in it. -==== - -* You have access to the local machine that you used to add the {op-system-base} compute machines to your cluster. You must have access to the `hosts` Ansible inventory file that defines your {op-system-base} machines and the `upgrade` playbook. - -* For updates to a minor version, the RPM repository is using the same version of {product-title} that is running on your cluster. - -.Procedure - -. Stop and disable firewalld on the host: -+ -[source,terminal] ----- -# systemctl disable --now firewalld.service ----- -+ -[NOTE] -==== -By default, the base OS RHEL with "Minimal" installation option enables firewalld service. Having the firewalld service enabled on your host prevents you from accessing {product-title} logs on the worker. Do not enable firewalld later if you wish to continue accessing {product-title} logs on the worker. -==== - -. Enable the repositories that are required for {product-title} {product-version}: -.. On the machine that you run the Ansible playbooks, update the required repositories: -+ -[source,terminal,subs="attributes+"] ----- -# subscription-manager repos --disable=rhocp-4.17-for-rhel-8-x86_64-rpms \ - --enable=rhocp-{product-version}-for-rhel-8-x86_64-rpms ----- -+ -[IMPORTANT] -==== -As of {product-title} 4.11, the Ansible playbooks are provided only for {op-system-base} 8. If a {op-system-base} 7 system was used as a host for the {product-title} 4.10 Ansible playbooks, you must either update the Ansible host to {op-system-base} 8, or create a new Ansible host on a {op-system-base} 8 system and copy over the inventories from the old Ansible host. -==== - -.. On the machine that you run the Ansible playbooks, update the required packages, including `openshift-ansible`: -+ -[source,terminal] ----- -# yum update openshift-ansible openshift-clients ----- - -.. On each {op-system-base} compute node, update the required repositories: -+ -[source,terminal,subs="attributes+"] ----- -# subscription-manager repos --disable=rhocp-4.17-for-rhel-8-x86_64-rpms \ - --enable=rhocp-{product-version}-for-rhel-8-x86_64-rpms ----- - -. Update a {op-system-base} worker machine: - -.. Review your Ansible inventory file at `//inventory/hosts` and update its contents so that the {op-system-base} 8 machines are listed in the `[workers]` section, as shown in the following example: -+ ----- -[all:vars] -ansible_user=root -#ansible_become=True - -openshift_kubeconfig_path="~/.kube/config" - -[workers] -mycluster-rhel8-0.example.com -mycluster-rhel8-1.example.com -mycluster-rhel8-2.example.com -mycluster-rhel8-3.example.com ----- - -.. Change to the `openshift-ansible` directory: -+ -[source,terminal] ----- -$ cd /usr/share/ansible/openshift-ansible ----- - -.. 
Run the `upgrade` playbook:
-+
-[source,terminal]
-----
-$ ansible-playbook -i //inventory/hosts playbooks/upgrade.yml <1>
-----
-<1> For ``, specify the path to the Ansible inventory file that you created.
-+
-[NOTE]
-====
-The `upgrade` playbook only updates the {product-title} packages. It does not update the operating system packages.
-====
-
-. After you update all of the workers, confirm that all of your cluster nodes have updated to the new version:
-+
-[source,terminal]
-----
-# oc get node
-----
-+
-.Example output
-[source,terminal]
-----
-NAME                        STATUS   ROLES    AGE    VERSION
-mycluster-control-plane-0   Ready    master   145m   v1.33.4
-mycluster-control-plane-1   Ready    master   145m   v1.33.4
-mycluster-control-plane-2   Ready    master   145m   v1.33.4
-mycluster-rhel8-0           Ready    worker   98m    v1.33.4
-mycluster-rhel8-1           Ready    worker   98m    v1.33.4
-mycluster-rhel8-2           Ready    worker   98m    v1.33.4
-mycluster-rhel8-3           Ready    worker   98m    v1.33.4
-----
-
-. Optional: Update the operating system packages that were not updated by the `upgrade` playbook. To update packages that are not on {product-version}, use the following command:
-+
-[source,terminal]
-----
-# yum update
-----
-+
-[NOTE]
-====
-You do not need to exclude RPM packages if you are using the same RPM repository that you used when you installed {product-version}.
-====
diff --git a/modules/rhel-compute-using-hooks.adoc b/modules/rhel-compute-using-hooks.adoc
deleted file mode 100644
index 87143fee7df1..000000000000
--- a/modules/rhel-compute-using-hooks.adoc
+++ /dev/null
@@ -1,54 +0,0 @@
-// Module included in the following assemblies:
-//
-// * updating/updating_a_cluster/updating-cluster-rhel-compute.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="rhel-compute-using-hooks_{context}"]
-= Configuring the Ansible inventory file to use hooks
-
-You define the hooks to use when you update the Red Hat Enterprise Linux (RHEL)
-compute machines, which are also known as worker machines, in the `hosts` inventory file under the `all:vars`
-section.
-
-.Prerequisites
-
-* You have access to the machine that you used to add the RHEL compute machines
-to your cluster. You must have access to the `hosts` Ansible inventory file that defines
-your RHEL machines.
-
-
-.Procedure
-
-. After you design the hook, create a YAML file that defines the Ansible tasks
-for it. This file must be a set of tasks and cannot be a playbook, as shown in
-the following example:
-+
-[source,yaml]
-----
----
-# Trivial example forcing an operator to acknowledge the start of an upgrade
-# file=/home/user/openshift-ansible/hooks/pre_compute.yml
-
-- name: note the start of a compute machine update
-  debug:
-    msg: "Compute machine upgrade of {{ inventory_hostname }} is about to start"
-
-- name: require the user to agree to start an upgrade
-  pause:
-    prompt: "Press Enter to start the compute machine update"
-----
-
-. Modify the `hosts` Ansible inventory file to specify the hook files. The
-hook files are specified as parameter values in the `[all:vars]` section,
-as shown:
-+
-.Example hook definitions in an inventory file
-[source]
-----
-[all:vars]
-openshift_node_pre_upgrade_hook=/home/user/openshift-ansible/hooks/pre_node.yml
-openshift_node_post_upgrade_hook=/home/user/openshift-ansible/hooks/post_node.yml
-----
-+
-To avoid ambiguity in the paths to the hook, use absolute paths instead of
-relative paths in their definitions.
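
For reference, a post-upgrade hook follows the same rules: it must be a set of tasks, not a playbook. The following sketch is a hypothetical `post_node.yml` tasks file that matches the `openshift_node_post_upgrade_hook` entry in the preceding inventory example; the path and tasks are illustrative only, so replace them with the checks that your environment requires:

[source,yaml]
----
---
# Hypothetical tasks for a post-upgrade hook
# file=/home/user/openshift-ansible/hooks/post_node.yml

- name: note the completion of a compute machine update
  debug:
    msg: "Compute machine update of {{ inventory_hostname }} is complete"

- name: pause briefly before continuing
  pause:
    seconds: 30
----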
diff --git a/modules/rhel-images-aws.adoc b/modules/rhel-images-aws.adoc
deleted file mode 100644
index 481ef8d074f1..000000000000
--- a/modules/rhel-images-aws.adoc
+++ /dev/null
@@ -1,57 +0,0 @@
-// Module included in the following assemblies:
-//
-// * machine_management/adding-rhel-compute.adoc
-// * machine_management/more-rhel-compute.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="rhel-images-aws_{context}"]
-= Listing latest available RHEL images on AWS
-
-AMI IDs correspond to native boot images for AWS. Because an AMI must exist before the EC2 instance is provisioned, you must know the AMI ID before configuration. Use the link:https://aws.amazon.com/cli/[AWS Command Line Interface (CLI)] to list the available {op-system-base-full} image IDs.
-
-.Prerequisites
-
-* You have installed the AWS CLI.
-
-.Procedure
-
-* Use this command to list {op-system-base} 8.8 Amazon Machine Images (AMIs):
-+
---
-[source,terminal]
-----
-$ aws ec2 describe-images --owners 309956199498 \ <1>
---query 'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]' \ <2>
---filters "Name=name,Values=RHEL-8.8*" \ <3>
---region us-east-1 \ <4>
---output table <5>
-----
-<1> The `--owners` command option shows Red Hat images based on the account ID `309956199498`.
-+
-[IMPORTANT]
-====
-This account ID is required to display AMI IDs for images that are provided by Red Hat.
-====
-<2> The `--query` command option sets how the images are sorted with the parameters `'sort_by(Images, &CreationDate)[*].[CreationDate,Name,ImageId]'`. In this case, the images are sorted by the creation date, and the table is structured to show the creation date, the name of the image, and the AMI IDs.
-<3> The `--filters` command option sets which version of {op-system-base} is shown. In this example, because the filter is set to `"Name=name,Values=RHEL-8.8*"`, {op-system-base} 8.8 AMIs are shown.
-<4> The `--region` command option sets the region where an AMI is stored.
-<5> The `--output` command option sets how the results are displayed.
---
-
-[NOTE]
-====
-When creating a {op-system-base} compute machine for AWS, ensure that the AMI is {op-system-base} 8.8 or a later version of {op-system-base} 8.
-==== - -.Example output -[source,terminal] ----- ------------------------------------------------------------------------------------------------------------- -| DescribeImages | -+---------------------------+-----------------------------------------------------+------------------------+ -| 2021-03-18T14:23:11.000Z | RHEL-8.8.0_HVM_BETA-20210309-x86_64-1-Hourly2-GP2 | ami-07eeb4db5f7e5a8fb | -| 2021-03-18T14:38:28.000Z | RHEL-8.8.0_HVM_BETA-20210309-arm64-1-Hourly2-GP2 | ami-069d22ec49577d4bf | -| 2021-05-18T19:06:34.000Z | RHEL-8.8.0_HVM-20210504-arm64-2-Hourly2-GP2 | ami-01fc429821bf1f4b4 | -| 2021-05-18T20:09:47.000Z | RHEL-8.8.0_HVM-20210504-x86_64-2-Hourly2-GP2 | ami-0b0af3577fe5e3532 | -+---------------------------+-----------------------------------------------------+------------------------+ ----- diff --git a/modules/rhel-preparing-node.adoc b/modules/rhel-preparing-node.adoc deleted file mode 100644 index c1fd07e0fb48..000000000000 --- a/modules/rhel-preparing-node.adoc +++ /dev/null @@ -1,92 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/adding-rhel-compute.adoc -// * machine_management/more-rhel-compute.adoc -// * post_installation_configuration/node-tasks.adoc - -[id="rhel-preparing-node_{context}"] -= Preparing a RHEL compute node - -Before you add a Red Hat Enterprise Linux (RHEL) machine to your {product-title} cluster, you must register each host with Red Hat Subscription Manager (RHSM), attach an active {product-title} subscription, and enable the required repositories. Ensure `NetworkManager` is enabled and configured to control all interfaces on the host. - -. On each host, register with RHSM: -+ -[source,terminal] ----- -# subscription-manager register --username= --password= ----- - -. Pull the latest subscription data from RHSM: -+ -[source,terminal] ----- -# subscription-manager refresh ----- - -. List the available subscriptions: -+ -[source,terminal] ----- -# subscription-manager list --available --matches '*OpenShift*' ----- - -. In the output for the previous command, find the pool ID for an {product-title} subscription and attach it: -+ -[source,terminal] ----- -# subscription-manager attach --pool= ----- - -. Disable all yum repositories: -.. Disable all the enabled RHSM repositories: -+ -[source,terminal] ----- -# subscription-manager repos --disable="*" ----- - -.. List the remaining yum repositories and note their names under `repo id`, if any: -+ -[source,terminal] ----- -# yum repolist ----- - -.. Use `yum-config-manager` to disable the remaining yum repositories: -+ -[source,terminal] ----- -# yum-config-manager --disable ----- -+ -Alternatively, disable all repositories: -+ -[source,terminal] ----- -# yum-config-manager --disable \* ----- -+ -Note that this might take a few minutes if you have a large number of available repositories - -. Enable only the repositories required by {product-title} {product-version}: -+ -[source,terminal,subs="attributes+"] ----- -# subscription-manager repos \ - --enable="rhel-8-for-x86_64-baseos-rpms" \ - --enable="rhel-8-for-x86_64-appstream-rpms" \ - --enable="rhocp-{product-version}-for-rhel-8-x86_64-rpms" \ - --enable="fast-datapath-for-rhel-8-x86_64-rpms" ----- - -. Stop and disable firewalld on the host: -+ -[source,terminal] ----- -# systemctl disable --now firewalld.service ----- -+ -[NOTE] -==== -You must not enable firewalld later. If you do, you cannot access {product-title} logs on the worker. 
-==== diff --git a/modules/rhel-preparing-playbook-machine.adoc b/modules/rhel-preparing-playbook-machine.adoc deleted file mode 100644 index f59cba01eca5..000000000000 --- a/modules/rhel-preparing-playbook-machine.adoc +++ /dev/null @@ -1,76 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/adding-rhel-compute.adoc -// * post_installation_configuration/node-tasks.adoc - -:_mod-docs-content-type: PROCEDURE -[id="rhel-preparing-playbook-machine_{context}"] -= Preparing the machine to run the playbook - -Before you can add compute machines that use {op-system-base-full} as the operating system to an {product-title} {product-version} cluster, you must prepare a {op-system-base} 8 machine to run an Ansible playbook that adds the new node to the cluster. This machine is not part of the cluster but must be able to access it. - -.Prerequisites - -* Install the OpenShift CLI (`oc`) on the machine that you run the playbook on. -* Log in as a user with `cluster-admin` permission. - -.Procedure - -. Ensure that the `kubeconfig` file for the cluster and the installation program that you used to install the cluster are on the {op-system-base} 8 machine. One way to accomplish this is to use the same machine that you used to install the cluster. - -. Configure the machine to access all of the {op-system-base} hosts that you plan to use as compute machines. You can use any method that your company allows, including a bastion with an SSH proxy or a VPN. - -. Configure a user on the machine that you run the playbook on that has SSH access to all of the {op-system-base} hosts. -+ -[IMPORTANT] -==== -If you use SSH key-based authentication, you must manage the key with an SSH agent. -==== - -. If you have not already done so, register the machine with RHSM and attach a pool with an `OpenShift` subscription to it: -.. Register the machine with RHSM: -+ -[source,terminal] ----- -# subscription-manager register --username= --password= ----- - -.. Pull the latest subscription data from RHSM: -+ -[source,terminal] ----- -# subscription-manager refresh ----- - -.. List the available subscriptions: -+ -[source,terminal] ----- -# subscription-manager list --available --matches '*OpenShift*' ----- - -.. In the output for the previous command, find the pool ID for an {product-title} subscription and attach it: -+ -[source,terminal] ----- -# subscription-manager attach --pool= ----- - -. Enable the repositories required by {product-title} {product-version}: -+ -[source,terminal,subs="attributes+"] ----- -# subscription-manager repos \ - --enable="rhel-8-for-x86_64-baseos-rpms" \ - --enable="rhel-8-for-x86_64-appstream-rpms" \ - --enable="rhocp-{product-version}-for-rhel-8-x86_64-rpms" ----- - -. Install the required packages, including `openshift-ansible`: -+ -[source,terminal] ----- -# yum install openshift-ansible openshift-clients jq ----- -+ -The `openshift-ansible` package provides installation program utilities and pulls in other packages that you require to add a {op-system-base} compute node to your cluster, such as Ansible, playbooks, and related configuration files. The `openshift-clients` provides the `oc` CLI, and the `jq` package improves the display of JSON output on your command line. 
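
As an optional check that is not part of the documented procedure, you can confirm that the machine has working cluster access and the expected tooling before you run any playbooks. A minimal sketch, assuming the `kubeconfig` file is in the default `~/.kube/config` location:

[source,terminal]
----
$ oc get nodes
$ ansible --version
----

If `oc get nodes` lists the existing cluster nodes, the machine can reach the cluster API, which the playbooks also require.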
diff --git a/modules/rhel-removing-rhcos.adoc b/modules/rhel-removing-rhcos.adoc deleted file mode 100644 index 9bb42cbe6a94..000000000000 --- a/modules/rhel-removing-rhcos.adoc +++ /dev/null @@ -1,57 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/adding-rhel-compute.adoc -// * post_installation_configuration/node-tasks.adoc - -:_mod-docs-content-type: PROCEDURE -[id="rhel-removing-rhcos_{context}"] -= Optional: Removing RHCOS compute machines from a cluster - -After you add the Red Hat Enterprise Linux (RHEL) compute machines to your cluster, you can optionally remove the {op-system-first} compute machines to free up resources. - -.Prerequisites - -* You have added RHEL compute machines to your cluster. - -.Procedure - -. View the list of machines and record the node names of the {op-system} compute machines: -+ -[source,terminal] ----- -$ oc get nodes -o wide ----- - -. For each {op-system} compute machine, delete the node: -.. Mark the node as unschedulable by running the `oc adm cordon` command: -+ -[source,terminal] ----- -$ oc adm cordon <1> ----- -<1> Specify the node name of one of the {op-system} compute machines. - -.. Drain all the pods from the node: -+ -[source,terminal] ----- -$ oc adm drain --force --delete-emptydir-data --ignore-daemonsets <1> ----- -<1> Specify the node name of the {op-system} compute machine that you isolated. - -.. Delete the node: -+ -[source,terminal] ----- -$ oc delete nodes <1> ----- -<1> Specify the node name of the {op-system} compute machine that you drained. - -. Review the list of compute machines to ensure that only the RHEL nodes remain: -+ -[source,terminal] ----- -$ oc get nodes -o wide ----- - -. Remove the {op-system} machines from the load balancer for your cluster's compute machines. You can delete the virtual machines or reimage the physical hardware for the {op-system} compute machines. diff --git a/modules/rhel-worker-tag.adoc b/modules/rhel-worker-tag.adoc deleted file mode 100644 index 150312cf0c63..000000000000 --- a/modules/rhel-worker-tag.adoc +++ /dev/null @@ -1,23 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/adding-rhel-compute.adoc -// * machine_management/more-rhel-compute.adoc - - -:_mod-docs-content-type: PROCEDURE -[id="rhel-worker-tag_{context}"] -= Tagging a {op-system-base} worker node as owned or shared - -A cluster uses the value of the `kubernetes.io/cluster/,Value=(owned|shared)` tag to determine the lifetime of the resources related to the AWS cluster. - -* The `owned` tag value should be added if the resource should be destroyed as part of destroying the cluster. -* The `shared` tag value should be added if the resource continues to exist after the cluster has been destroyed. This tagging denotes that the cluster uses this resource, but there is a separate owner for the resource. - -.Procedure - -* With {op-system-base} compute machines, the {op-system-base} worker instance must be tagged with `kubernetes.io/cluster/=owned` or `kubernetes.io/cluster/=shared`. - -[NOTE] -==== -Do not tag all existing security groups with the `kubernetes.io/cluster/,Value=` tag, or the Elastic Load Balancing (ELB) will not be able to create a load balancer. 
-==== diff --git a/modules/sts-mode-installing-manual-run-installer.adoc b/modules/sts-mode-installing-manual-run-installer.adoc deleted file mode 100644 index e81bc3c9cc63..000000000000 --- a/modules/sts-mode-installing-manual-run-installer.adoc +++ /dev/null @@ -1,66 +0,0 @@ -// Module included in the following assemblies: -// -// * authentication/managing_cloud_provider_credentials/cco-mode-sts.adoc -// * authentication/managing_cloud_provider_credentials/cco-mode-gcp-workload-identity.adoc - -:_mod-docs-content-type: PROCEDURE -[id="sts-mode-installing-manual-run-installer_{context}"] -= Running the installer - -.Prerequisites - -* Configure an account with the cloud platform that hosts your cluster. -* Obtain the {product-title} release image. - -.Procedure - -. Change to the directory that contains the installation program and create the `install-config.yaml` file: -+ -[source,terminal] ----- -$ openshift-install create install-config --dir ----- -+ -where `` is the directory in which the installation program creates files. - -. Edit the `install-config.yaml` configuration file so that it contains the `credentialsMode` parameter set to `Manual`. -+ -.Example `install-config.yaml` configuration file -[source,yaml] ----- -apiVersion: v1 -baseDomain: cluster1.example.com -credentialsMode: Manual <1> -compute: -- architecture: amd64 - hyperthreading: Enabled ----- -<1> This line is added to set the `credentialsMode` parameter to `Manual`. - -. Create the required {product-title} installation manifests: -+ -[source,terminal] ----- -$ openshift-install create manifests ----- - -. Copy the manifests that `ccoctl` generated to the manifests directory that the installation program created: -+ -[source,terminal,subs="+quotes"] ----- -$ cp //manifests/* ./manifests/ ----- - -. Copy the `tls` directory containing the private key that the `ccoctl` generated to the installation directory: -+ -[source,terminal,subs="+quotes"] ----- -$ cp -a //tls . ----- - -. Run the {product-title} installer: -+ -[source,terminal] ----- -$ ./openshift-install create cluster ----- diff --git a/modules/update-duration-rhel-nodes.adoc b/modules/update-duration-rhel-nodes.adoc deleted file mode 100644 index c2a2a39e4629..000000000000 --- a/modules/update-duration-rhel-nodes.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * updating/understanding_updates/understanding-openshift-update-duration.adoc - -:_mod-docs-content-type: CONCEPT -[id="redhat-enterprise-linux-nodes_{context}"] -= {op-system-base-full} compute nodes - -{op-system-base-full} compute nodes require an additional usage of `openshift-ansible` to update node binary components. The actual time spent updating {op-system-base} compute nodes should not be significantly different from {op-system-first} compute nodes. 
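
For context, this additional `openshift-ansible` step is the `upgrade` playbook run that is described in the {op-system-base} compute machine update procedure. A minimal sketch, where `<path>` is a placeholder for the directory that contains your Ansible inventory file:

[source,terminal]
----
$ ansible-playbook -i /<path>/inventory/hosts playbooks/upgrade.yml
----

The time that this run takes depends primarily on the number of {op-system-base} compute nodes in the inventory.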
diff --git a/modules/updating-troubleshooting-clear.adoc b/modules/updating-troubleshooting-clear.adoc deleted file mode 100644 index 8cbda3c63614..000000000000 --- a/modules/updating-troubleshooting-clear.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module included in the following assemblies: -// -// * updating/troubleshooting_updates/recovering-update-before-applied.adoc - -[id="updating-troubleshooting-clear_{context}"] -= Recovering when an update fails before it is applied - -If an update fails before it is applied, such as when the version that you specify cannot be found, you can cancel the update: - -[source,terminal] ----- -$ oc adm upgrade --clear ----- - -[IMPORTANT] -==== -If an update fails at any other point, you must contact Red Hat support. Rolling your cluster back to a previous version is not supported. -==== \ No newline at end of file
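
As an optional follow-up that is not part of the documented procedure, you can confirm that the failed update request was cleared by reviewing the cluster version status, for example:

[source,terminal]
----
$ oc adm upgrade
$ oc get clusterversion
----

If the output no longer references the version that you tried to update to, the request was cleared successfully.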