From 9a0ab7c9ad64acbf68eb77e707ebbf8a7cb914a4 Mon Sep 17 00:00:00 2001 From: Max Bridges Date: Wed, 15 Oct 2025 21:54:45 -0400 Subject: [PATCH] remove unused modules --- ...annotating-a-route-with-a-cookie-name.adoc | 37 -- modules/builds-output-image-digest.adoc | 32 -- ...-term-creds-component-permissions-gcp.adoc | 9 - modules/completing-installation.adoc | 33 -- modules/configmap-create.adoc | 17 - modules/configmap-overview.adoc | 62 --- modules/configuration-resource-overview.adoc | 68 ---- ...nfiguring-layer-three-routed-topology.adoc | 32 -- ...os-layering-configuring-on-extensions.adoc | 116 ------ modules/cpmso-feat-vertical-resize.adoc | 7 - modules/creating-your-first-content.adoc | 109 ----- modules/das-operator-installing-cli.adoc | 319 --------------- .../das-operator-installing-web-console.adoc | 256 ------------ modules/das-operator-installing.adoc | 10 - modules/das-operator-uninstalling-cli.adoc | 96 ----- ...das-operator-uninstalling-web-console.adoc | 51 --- modules/das-operator-uninstalling.adoc | 9 - ...ging-deploying-storage-considerations.adoc | 108 ----- modules/feature-gate-features.adoc | 34 -- ...nifest-list-through-imagestreamimport.adoc | 44 -- ...nstall-config-aws-local-zones-subnets.adoc | 35 -- ...-on-a-single-node-on-a-cloud-provider.adoc | 18 - modules/installation-about-custom.adoc | 52 --- .../installation-aws-editing-manifests.adoc | 116 ------ ...tallation-azure-finalizing-encryption.adoc | 155 ------- ...stallation-creating-worker-machineset.adoc | 144 ------- .../installation-gcp-shared-vpc-ingress.adoc | 49 --- ...supported-aws-outposts-instance-types.adoc | 27 -- ...ation-localzone-generate-k8s-manifest.adoc | 176 -------- modules/installation-osp-troubleshooting.adoc | 40 -- ...ion-requirements-user-infra-ibm-z-kvm.adoc | 195 --------- ...alling-gitops-operator-in-web-console.adoc | 18 - .../installing-gitops-operator-using-cli.adoc | 60 --- modules/machine-configs-and-pools.adoc | 75 ---- modules/machineset-osp-adding-bare-metal.adoc | 90 ----- .../metering-cluster-capacity-examples.adoc | 48 --- modules/metering-cluster-usage-examples.adoc | 27 -- ...metering-cluster-utilization-examples.adoc | 26 -- .../metering-configure-persistentvolumes.adoc | 57 --- modules/metering-debugging.adoc | 228 ----------- .../metering-exposing-the-reporting-api.adoc | 159 -------- modules/metering-install-operator.adoc | 133 ------ modules/metering-install-prerequisites.adoc | 13 - modules/metering-install-verify.adoc | 95 ----- modules/metering-overview.adoc | 33 -- modules/metering-prometheus-connection.adoc | 55 --- modules/metering-reports.adoc | 381 ------------------ modules/metering-store-data-in-azure.adoc | 57 --- modules/metering-store-data-in-gcp.adoc | 53 --- .../metering-store-data-in-s3-compatible.adoc | 48 --- modules/metering-store-data-in-s3.adoc | 136 ------- ...metering-store-data-in-shared-volumes.adoc | 150 ------- modules/metering-troubleshooting.adoc | 195 --------- modules/metering-uninstall-crds.adoc | 28 -- modules/metering-uninstall.adoc | 36 -- ...ring-use-mysql-or-postgresql-for-hive.adoc | 89 ---- modules/metering-viewing-report-results.adoc | 103 ----- modules/metering-writing-reports.adoc | 73 ---- .../minimum-ibm-z-system-requirements.adoc | 120 ------ modules/mod-docs-ocp-conventions.adoc | 154 ------- ...ulti-architecture-scheduling-overview.adoc | 13 - modules/nbde-managing-encryption-keys.adoc | 10 - modules/nw-egress-ips-automatic.adoc | 89 ---- modules/nw-egress-ips-static.adoc | 86 ---- 
modules/nw-egress-router-configmap.adoc | 92 ----- modules/nw-egress-router-dest-var.adoc | 107 ----- modules/nw-egress-router-dns-mode.adoc | 68 ---- modules/nw-egress-router-http-proxy-mode.adoc | 62 --- modules/nw-egress-router-pod.adoc | 231 ----------- modules/nw-egress-router-redirect-mode.adoc | 46 --- ...-integrating-route-secret-certificate.adoc | 6 - modules/nw-multinetwork-sriov.adoc | 314 --------------- modules/nw-multitenant-global.adoc | 26 -- modules/nw-multitenant-isolation.adoc | 27 -- modules/nw-multitenant-joining.adoc | 37 -- modules/nw-ne-changes-externalip-ovn.adoc | 20 - modules/nw-ne-comparing-ingress-route.adoc | 11 - modules/nw-ne-openshift-dns.adoc | 19 - modules/nw-networking-glossary-terms.adoc | 118 ------ modules/nw-networkpolicy-optimize.adoc | 22 - modules/nw-pdncc-view.adoc | 0 modules/nw-secondary-ext-gw-status.adoc | 45 --- .../nw-sriov-about-all-multi-cast_mode.adoc | 20 - modules/nw-udn-examples.adoc | 67 --- modules/patching-ovnk-address-ranges.adoc | 43 -- .../persistent-storage-csi-cloning-using.adoc | 32 -- .../preferred-ibm-z-system-requirements.adoc | 104 ----- ...oviding-direct-documentation-feedback.adoc | 24 -- modules/rbac-updating-policy-definitions.adoc | 57 --- modules/running-modified-installation.adoc | 18 - modules/service-accounts-adding-secrets.adoc | 70 ---- .../service-accounts-managing-secrets.adoc | 65 --- ...-mode-installing-manual-run-installer.adoc | 66 --- modules/understanding-installation.adoc | 8 - modules/updating-troubleshooting-clear.adoc | 18 - modules/virt-importing-vm-wizard.adoc | 150 ------- 96 files changed, 7387 deletions(-) delete mode 100644 modules/annotating-a-route-with-a-cookie-name.adoc delete mode 100644 modules/builds-output-image-digest.adoc delete mode 100644 modules/cco-short-term-creds-component-permissions-gcp.adoc delete mode 100644 modules/completing-installation.adoc delete mode 100644 modules/configmap-create.adoc delete mode 100644 modules/configmap-overview.adoc delete mode 100644 modules/configuration-resource-overview.adoc delete mode 100644 modules/configuring-layer-three-routed-topology.adoc delete mode 100644 modules/coreos-layering-configuring-on-extensions.adoc delete mode 100644 modules/cpmso-feat-vertical-resize.adoc delete mode 100644 modules/creating-your-first-content.adoc delete mode 100644 modules/das-operator-installing-cli.adoc delete mode 100644 modules/das-operator-installing-web-console.adoc delete mode 100644 modules/das-operator-installing.adoc delete mode 100644 modules/das-operator-uninstalling-cli.adoc delete mode 100644 modules/das-operator-uninstalling-web-console.adoc delete mode 100644 modules/das-operator-uninstalling.adoc delete mode 100644 modules/efk-logging-deploying-storage-considerations.adoc delete mode 100644 modules/feature-gate-features.adoc delete mode 100644 modules/importing-manifest-list-through-imagestreamimport.adoc delete mode 100644 modules/install-creating-install-config-aws-local-zones-subnets.adoc delete mode 100644 modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc delete mode 100644 modules/installation-about-custom.adoc delete mode 100644 modules/installation-aws-editing-manifests.adoc delete mode 100644 modules/installation-azure-finalizing-encryption.adoc delete mode 100644 modules/installation-creating-worker-machineset.adoc delete mode 100644 modules/installation-gcp-shared-vpc-ingress.adoc delete mode 100644 modules/installation-identify-supported-aws-outposts-instance-types.adoc delete 
mode 100644 modules/installation-localzone-generate-k8s-manifest.adoc delete mode 100644 modules/installation-osp-troubleshooting.adoc delete mode 100644 modules/installation-requirements-user-infra-ibm-z-kvm.adoc delete mode 100644 modules/installing-gitops-operator-in-web-console.adoc delete mode 100644 modules/installing-gitops-operator-using-cli.adoc delete mode 100644 modules/machine-configs-and-pools.adoc delete mode 100644 modules/machineset-osp-adding-bare-metal.adoc delete mode 100644 modules/metering-cluster-capacity-examples.adoc delete mode 100644 modules/metering-cluster-usage-examples.adoc delete mode 100644 modules/metering-cluster-utilization-examples.adoc delete mode 100644 modules/metering-configure-persistentvolumes.adoc delete mode 100644 modules/metering-debugging.adoc delete mode 100644 modules/metering-exposing-the-reporting-api.adoc delete mode 100644 modules/metering-install-operator.adoc delete mode 100644 modules/metering-install-prerequisites.adoc delete mode 100644 modules/metering-install-verify.adoc delete mode 100644 modules/metering-overview.adoc delete mode 100644 modules/metering-prometheus-connection.adoc delete mode 100644 modules/metering-reports.adoc delete mode 100644 modules/metering-store-data-in-azure.adoc delete mode 100644 modules/metering-store-data-in-gcp.adoc delete mode 100644 modules/metering-store-data-in-s3-compatible.adoc delete mode 100644 modules/metering-store-data-in-s3.adoc delete mode 100644 modules/metering-store-data-in-shared-volumes.adoc delete mode 100644 modules/metering-troubleshooting.adoc delete mode 100644 modules/metering-uninstall-crds.adoc delete mode 100644 modules/metering-uninstall.adoc delete mode 100644 modules/metering-use-mysql-or-postgresql-for-hive.adoc delete mode 100644 modules/metering-viewing-report-results.adoc delete mode 100644 modules/metering-writing-reports.adoc delete mode 100644 modules/minimum-ibm-z-system-requirements.adoc delete mode 100644 modules/mod-docs-ocp-conventions.adoc delete mode 100644 modules/multi-architecture-scheduling-overview.adoc delete mode 100644 modules/nbde-managing-encryption-keys.adoc delete mode 100644 modules/nw-egress-ips-automatic.adoc delete mode 100644 modules/nw-egress-ips-static.adoc delete mode 100644 modules/nw-egress-router-configmap.adoc delete mode 100644 modules/nw-egress-router-dest-var.adoc delete mode 100644 modules/nw-egress-router-dns-mode.adoc delete mode 100644 modules/nw-egress-router-http-proxy-mode.adoc delete mode 100644 modules/nw-egress-router-pod.adoc delete mode 100644 modules/nw-egress-router-redirect-mode.adoc delete mode 100644 modules/nw-ingress-integrating-route-secret-certificate.adoc delete mode 100644 modules/nw-multinetwork-sriov.adoc delete mode 100644 modules/nw-multitenant-global.adoc delete mode 100644 modules/nw-multitenant-isolation.adoc delete mode 100644 modules/nw-multitenant-joining.adoc delete mode 100644 modules/nw-ne-changes-externalip-ovn.adoc delete mode 100644 modules/nw-ne-comparing-ingress-route.adoc delete mode 100644 modules/nw-ne-openshift-dns.adoc delete mode 100644 modules/nw-networking-glossary-terms.adoc delete mode 100644 modules/nw-networkpolicy-optimize.adoc delete mode 100644 modules/nw-pdncc-view.adoc delete mode 100644 modules/nw-secondary-ext-gw-status.adoc delete mode 100644 modules/nw-sriov-about-all-multi-cast_mode.adoc delete mode 100644 modules/nw-udn-examples.adoc delete mode 100644 modules/patching-ovnk-address-ranges.adoc delete mode 100644 modules/persistent-storage-csi-cloning-using.adoc 
delete mode 100644 modules/preferred-ibm-z-system-requirements.adoc delete mode 100644 modules/providing-direct-documentation-feedback.adoc delete mode 100644 modules/rbac-updating-policy-definitions.adoc delete mode 100644 modules/running-modified-installation.adoc delete mode 100644 modules/service-accounts-adding-secrets.adoc delete mode 100644 modules/service-accounts-managing-secrets.adoc delete mode 100644 modules/sts-mode-installing-manual-run-installer.adoc delete mode 100644 modules/understanding-installation.adoc delete mode 100644 modules/updating-troubleshooting-clear.adoc delete mode 100644 modules/virt-importing-vm-wizard.adoc diff --git a/modules/annotating-a-route-with-a-cookie-name.adoc b/modules/annotating-a-route-with-a-cookie-name.adoc deleted file mode 100644 index 83b40c321f80..000000000000 --- a/modules/annotating-a-route-with-a-cookie-name.adoc +++ /dev/null @@ -1,37 +0,0 @@ -// Module included in the following assemblies: -// -// *using-cookies-to-keep-route-statefulness - -:_mod-docs-content-type: PROCEDURE -[id="annotating-a-route-with-a-cookie_{context}"] -= Annotating a route with a cookie - -You can set a cookie name to overwrite the default, auto-generated one for the -route. This allows the application receiving route traffic to know the cookie -name. By deleting the cookie it can force the next request to re-choose an -endpoint. So, if a server was overloaded it tries to remove the requests from the -client and redistribute them. - -.Procedure - -. Annotate the route with the desired cookie name: -+ -[source,terminal] ----- -$ oc annotate route router.openshift.io/="-" ----- -+ -For example, to annotate the cookie name of `my_cookie` to the `my_route` with -the annotation of `my_cookie_anno`: -+ -[source,terminal] ----- -$ oc annotate route my_route router.openshift.io/my_cookie="-my_cookie_anno" ----- - -. Save the cookie, and access the route: -+ -[source,terminal] ----- -$ curl $my_route -k -c /tmp/my_cookie ----- diff --git a/modules/builds-output-image-digest.adoc b/modules/builds-output-image-digest.adoc deleted file mode 100644 index c4bb8b1e0aa0..000000000000 --- a/modules/builds-output-image-digest.adoc +++ /dev/null @@ -1,32 +0,0 @@ -// Module included in the following assemblies: -// -// * unused_topics/builds-output-image-digest - -[id="builds-output-image-digest_{context}"] -= Output image digest - -Built images can be uniquely identified by their digest, which can -later be used to pull the image by digest regardless of its current tag. - -ifdef::openshift-enterprise,openshift-webscale,openshift-origin[] -`Docker` and -endif::[] -`Source-to-Image (S2I)` builds store the digest in -`Build.status.output.to.imageDigest` after the image is pushed to a registry. -The digest is computed by the registry. Therefore, it may not always be present, -for example when the registry did not return a digest, or when the builder image -did not understand its format. 
- -.Built Image Digest After a Successful Push to the Registry -[source,yaml] ----- -status: - output: - to: - imageDigest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912 ----- - -[role="_additional-resources"] -.Additional resources -* link:https://docs.docker.com/registry/spec/api/#/content-digests[Docker Registry HTTP API V2: digest] -* link:https://docs.docker.com/engine/reference/commandline/pull/#/pull-an-image-by-digest-immutable-identifier[`docker pull`: pull the image by digest] diff --git a/modules/cco-short-term-creds-component-permissions-gcp.adoc b/modules/cco-short-term-creds-component-permissions-gcp.adoc deleted file mode 100644 index 1248c24440dd..000000000000 --- a/modules/cco-short-term-creds-component-permissions-gcp.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * authentication/managing_cloud_provider_credentials/cco-short-term-creds.adoc - -:_mod-docs-content-type: REFERENCE -[id="cco-short-term-creds-component-permissions-gcp_{context}"] -= GCP component secret permissions requirements - -//This topic is a placeholder for when GCP role granularity can bbe documented \ No newline at end of file diff --git a/modules/completing-installation.adoc b/modules/completing-installation.adoc deleted file mode 100644 index a3d3235f7312..000000000000 --- a/modules/completing-installation.adoc +++ /dev/null @@ -1,33 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="completing-installation_{context}"] -= Completing and verifying the {product-title} installation - -When the bootstrap node is done with its work and has handed off control to the new {product-title} cluster, the bootstrap node is destroyed. The installation program waits for the cluster to initialize, creates a route to the {product-title} console, and presents the information and credentials you require to log in to the cluster. Here’s an example: - ----- -INFO Install complete!                                 - -INFO Run 'export KUBECONFIG=/home/joe/ocp/auth/kubeconfig' to manage the cluster with 'oc', the {product-title} CLI. - -INFO The cluster is ready when 'oc login -u kubeadmin -p ' succeeds (wait a few minutes). - -INFO Access the {product-title} web-console here: https://console-openshift-console.apps.mycluster.devel.example.com - -INFO Login to the console with user: kubeadmin, password: "password" ----- - -To access the {product-title} cluster from your web browser, log in as kubeadmin with the password, using the URL shown: - -     https://console-openshift-console.apps.mycluster.devel.example.com - -To access the {product-title} cluster from the command line, identify the location of the credentials file (export the KUBECONFIG variable) and log in as kubeadmin with the provided password: ----- -$ export KUBECONFIG=/home/joe/ocp/auth/kubeconfig - -$ oc login -u kubeadmin -p ----- - -At this point, you can begin using the {product-title} cluster. To understand the management of your {product-title} cluster going forward, you should explore the {product-title} control plane. 
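After you log in, you can confirm that the exported `KUBECONFIG` and the `kubeadmin` credentials work with a couple of read-only `oc` queries. This is a minimal sketch added for illustration; both subcommands are standard `oc` commands, and their output depends entirely on your cluster.

[source,terminal]
----
$ oc whoami        # prints the identity you are logged in as, for example kube:admin
$ oc get nodes     # lists the control plane and worker nodes and their readiness
----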
diff --git a/modules/configmap-create.adoc b/modules/configmap-create.adoc deleted file mode 100644 index 5284caf4cfe0..000000000000 --- a/modules/configmap-create.adoc +++ /dev/null @@ -1,17 +0,0 @@ -// Module included in the following assemblies: -// -// * builds/setting-up-trusted-ca - -[id="configmap-create_{context}"] -= Creating a ConfigMap - -You can use the following command to create a ConfigMap from -directories, specific files, or literal values. - -.Procedure - -* Create a ConfigMap: - ----- -$ oc create configmap [options] ----- diff --git a/modules/configmap-overview.adoc b/modules/configmap-overview.adoc deleted file mode 100644 index 6c7ba1fcbbb6..000000000000 --- a/modules/configmap-overview.adoc +++ /dev/null @@ -1,62 +0,0 @@ -// Module included in the following assemblies: -// -// * builds/setting-up-trusted-ca - -[id="configmap-overview_{context}"] -= Understanding ConfigMaps - -Many applications require configuration using some combination of configuration -files, command line arguments, and environment variables. These configuration -artifacts are decoupled from image content in order to keep containerized -applications portable. - -The ConfigMap object provides mechanisms to inject containers with -configuration data while keeping containers agnostic of {product-title}. A -ConfigMap can be used to store fine-grained information like individual -properties or coarse-grained information like entire configuration files or JSON -blobs. - -The ConfigMap API object holds key-value pairs of configuration data that -can be consumed in pods or used to store configuration data for system -components such as controllers. ConfigMap is similar to secrets, but -designed to more conveniently support working with strings that do not contain -sensitive information. For example: - -.ConfigMap Object Definition -[source,yaml] ----- -kind: ConfigMap -apiVersion: v1 -metadata: - creationTimestamp: 2016-02-18T19:14:38Z - name: example-config - namespace: default -data: <1> - example.property.1: hello - example.property.2: world - example.property.file: |- - property.1=value-1 - property.2=value-2 - property.3=value-3 -binaryData: - bar: L3Jvb3QvMTAw <2> ----- -<1> Contains the configuration data. -<2> Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. -Enter the file data in Base 64. - -[NOTE] -==== -You can use the `binaryData` field when you create a ConfigMap from a binary -file, such as an image. -==== - -Configuration data can be consumed in pods in a variety of ways. A ConfigMap -can be used to: - -1. Populate the value of environment variables. -2. Set command-line arguments in a container. -3. Populate configuration files in a volume. - -Both users and system components can store configuration data in a -ConfigMap. diff --git a/modules/configuration-resource-overview.adoc b/modules/configuration-resource-overview.adoc deleted file mode 100644 index ceef1e265043..000000000000 --- a/modules/configuration-resource-overview.adoc +++ /dev/null @@ -1,68 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="configuration-resource-overview_{context}"] -= About Configuration Resources in {product-title} - -You perform many customization and configuration tasks after you deploy your -cluster, including configuring networking and setting your identity provider. - -In {product-title}, you modify Configuration Resources to determine the behavior -of these integrations. 
The Configuration Resources are controlled by Operators -that are managed by the Cluster Version Operator, which manages all of the -Operators that run your cluster's control plane. - -You can customize the following Configuration Resources: - -[cols="3a,8a",options="header"] -|=== - -|Configuration Resource |Description -|Authentication -| - -|DNS -| - -|Samples -| * *ManagementState:* -** *Managed.* The operator updates the samples as the configuration dictates. -** *Unmanaged.* The operator ignores updates to the samples resource object and -any imagestreams or templates in the `openshift` namespace. -** *Removed.* The operator removes the set of managed imagestreams -and templates in the `openshift` namespace. It ignores new samples created by -the cluster administrator or any samples in the skipped lists. After the removals are -complete, the operator works like it is in the `Unmanaged` state and ignores -any watch events on the sample resources, imagestreams, or templates. It -operates on secrets to facilitate the CENTOS to RHEL switch. There are some -caveats around concurrent create and removal. -* *Samples Registry:* Overrides the registry from which images are imported. -* *Architecture:* Place holder to choose an architecture type. Currently only x86 -is supported. -* *Skipped Imagestreams:* Imagestreams that are in the operator's -inventory, but that the cluster administrator wants the operator to ignore or not manage. -* *Skipped Templates:* Templates that are in the operator's inventory, but that -the cluster administrator wants the operator to ignore or not manage. - -|Infrastructure -| - -|Ingress -| - -|Network -| - -|OAuth -| - -|=== - -While you can complete many other customizations and configure other integrations -with an {product-title} cluster, configuring these resources is a common first -step after you deploy a cluster. - -Like all Operators, the Configuration Resources are governed by -Custom Resource Definitions (CRD). You customize the CRD for each -Configuration Resource that you want to modify in your cluster. diff --git a/modules/configuring-layer-three-routed-topology.adoc b/modules/configuring-layer-three-routed-topology.adoc deleted file mode 100644 index 0b614d892a69..000000000000 --- a/modules/configuring-layer-three-routed-topology.adoc +++ /dev/null @@ -1,32 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/multiple_networks/configuring-additional-network.adoc - -:_mod-docs-content-type: CONCEPT -[id="configuration-layer-three-routed-topology_{context}"] -= Configuration for a routed topology - -The routed (layer 3) topology networks are a simplified topology for the cluster default network without egress or ingress. In this topology, there is one logical switch per node, each with a different subnet, and a router interconnecting all logical switches. - -This configuration can be used for IPv6 and dual-stack deployments. - -[NOTE] -==== -* Layer 3 routed topology networks only allow for the transfer of data packets between pods within a cluster. -* Creating a secondary network with an IPv6 subnet or dual-stack subnets fails on a single-stack {product-title} cluster. This is a known limitation and will be fixed a future version of {product-title}. -==== - -The following `NetworkAttachmentDefinition` custom resource definition (CRD) YAML describes the fields needed to configure a routed secondary network. 
- -[source,yaml] ----- - { - "cniVersion": "0.3.1", - "name": "ns1-l3-network", - "type": "ovn-k8s-cni-overlay", - "topology":"layer3", - "subnets": "10.128.0.0/16/24", - "mtu": 1300, - "netAttachDefName": "ns1/l3-network" - } ----- \ No newline at end of file diff --git a/modules/coreos-layering-configuring-on-extensions.adoc b/modules/coreos-layering-configuring-on-extensions.adoc deleted file mode 100644 index 41fab05636e7..000000000000 --- a/modules/coreos-layering-configuring-on-extensions.adoc +++ /dev/null @@ -1,116 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_configuration/coreos-layering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="coreos-layering-configuring-on-extensions_{context}"] -= Installing extensions into an on-cluster custom layered image - -You can install {op-system-first} extensions into your on-cluster custom layered image by creating a machine config that lists the extensions that you want to install. The Machine Config Operator (MCO) installs the extensions onto the nodes associated with a specific machine config pool (MCP). - -For a list of the currently supported extensions, see "Adding extensions to RHCOS." - -After you make the change, the MCO reboots the nodes associated with the specified machine config pool. - -[NOTE] -==== -include::snippets/coreos-layering-configuring-on-pause.adoc[] -==== - -.Prerequisites - -* You have opted in to on-cluster layering by creating a `MachineOSConfig` object. - -.Procedure - -. Create a YAML file for the machine config similar to the following example: -+ -[source,yaml] ----- -apiVersion: machineconfiguration.openshift.io/v1 <1> -kind: MachineConfig -metadata: - labels: - machineconfiguration.openshift.io/role: worker <2> - name: 80-worker-extensions -spec: - config: - ignition: - version: 3.2.0 - extensions: <3> - - usbguard - - kerberos ----- -<1> Specifies the `machineconfiguration.openshift.io/v1` API that is required for `MachineConfig` CRs. -<2> Specifies the machine config pool to apply the `MachineConfig` object to. -<3> Lists the {op-system-first} extensions that you want to install. - -. Create the MCP object: -+ -[source,terminal] ----- -$ oc create -f .yaml ----- - -.Verification - -. You can watch the build progress by using the following command: -+ -[source,terminal] ----- -$ oc get machineosbuilds ----- -+ -.Example output -[source,terminal] ----- -NAME PREPARED BUILDING SUCCEEDED INTERRUPTED FAILED -layered-f8ab2d03a2f87a2acd449177ceda805d False True False False False <1> ----- -<1> The value `True` in the `BUILDING` column indicates that the `MachineOSBuild` object is building. When the `SUCCEEDED` column reports `TRUE`, the build is complete. - -. You can watch as the new machine config is rolled out to the nodes by using the following command: -+ -[source,terminal] ----- -$ oc get machineconfigpools ----- -+ -.Example output -[source,terminal] ----- -NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE -master rendered-master-a0b404d061a6183cc36d302363422aba True False False 3 3 3 0 3h38m -worker rendered-worker-221507009cbcdec0eec8ab3ccd789d18 False True False 2 2 2 0 3h38m <1> ----- -<1> The value `FALSE` in the `UPDATED` column indicates that the `MachineOSBuild` object is building. When the `UPDATED` column reports `FALSE`, the new custom layered image has rolled out to the nodes. - -. When the associated machine config pool is updated, check that the extensions were installed: - -.. 
Open an `oc debug` session to the node by running the following command: -+ -[source,terminal] ----- -$ oc debug node/ ----- - -.. Set `/host` as the root directory within the debug shell by running the following command: -+ -[source,terminal] ----- -sh-5.1# chroot /host ----- - -.. Use an appropriate command to verify that the extensions were installed. The following example shows that the usbguard extension was installed: -+ -[source,terminal] ----- -sh-5.1# rpm -qa |grep usbguard ----- -+ -.Example output -[source,terminal] ----- -usbguard-selinux-1.0.0-15.el9.noarch -usbguard-1.0.0-15.el9.x86_64 ----- diff --git a/modules/cpmso-feat-vertical-resize.adoc b/modules/cpmso-feat-vertical-resize.adoc deleted file mode 100644 index c0f3ace4e601..000000000000 --- a/modules/cpmso-feat-vertical-resize.adoc +++ /dev/null @@ -1,7 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/cpmso-about.adoc - -:_mod-docs-content-type: CONCEPT -[id="cpmso-feat-vertical-resize_{context}"] -= Vertical resizing of the control plane \ No newline at end of file diff --git a/modules/creating-your-first-content.adoc b/modules/creating-your-first-content.adoc deleted file mode 100644 index 8eee346317ba..000000000000 --- a/modules/creating-your-first-content.adoc +++ /dev/null @@ -1,109 +0,0 @@ -// Module included in the following assemblies: -// -// assembly_getting-started-modular-docs-ocp.adoc - -// Base the file name and the ID on the module title. For example: -// * file name: doing-procedure-a.adoc -// * ID: [id="doing-procedure-a"] -// * Title: = Doing procedure A - -:_mod-docs-content-type: PROCEDURE -[id="creating-your-first-content_{context}"] -= Creating your first content - -In this procedure, you will create your first example content using modular -docs for the OpenShift docs repository. - -.Prerequisites - -* You have forked and then cloned the OpenShift docs repository locally. -* You have downloaded and are using Atom text editor for creating content. -* You have installed AsciiBinder (the build tool for OpenShift docs). - -.Procedure - -. Navigate to your locally cloned OpenShift docs repository on a command line. - -. Create a new feature branch: - -+ ----- -git checkout master -git checkout -b my_first_mod_docs ----- -+ -. If there is no `modules` directory in the root folder, create one. - -. In this `modules` directory, create a file called `my-first-module.adoc`. - -. Open this newly created file in Atom and copy into this file the contents from -the link:https://raw.githubusercontent.com/redhat-documentation/modular-docs/master/modular-docs-manual/files/TEMPLATE_PROCEDURE_doing-one-procedure.adoc[procedure template] -from Modular docs repository. - -. Replace the content in this file with some example text using the guidelines -in the comments. Give this module the title `My First Module`. Save this file. -You have just created your first module. - -. Create a new directory from the root of your OpenShift docs repository and -call it `my_guide`. - -. In this my_guide directory, create a new file called -`assembly_my-first-assembly.adoc`. - -. Open this newly created file in Atom and copy into this file the contents from -the link:https://raw.githubusercontent.com/redhat-documentation/modular-docs/master/modular-docs-manual/files/TEMPLATE_ASSEMBLY_a-collection-of-modules.adoc[assembly template] -from Modular docs repository. - -. Replace the content in this file with some example text using the guidelines -in the comments. 
Give this assembly the title: `My First Assembly`. - -. Before the first anchor id in this assembly file, add a `:context:` attribute: - -+ -`:context: assembly-first-content` - -. After the Prerequisites section, add the module created earlier (the following is -deliberately spelled incorrectly to pass validation. Use 'include' instead of 'ilude'): - -+ -`ilude::modules/my-first-module.adoc[leveloffset=+1]` - -+ -Remove the other includes that are present in this file. Save this file. - -. Open up `my-first-module.adoc` in the `modules` folder. At the top of -this file, in the comments section, add the following to indicate in which -assembly this module is being used: - -+ ----- -// Module included in the following assemblies: -// -// my_guide/assembly_my-first-assembly.adoc ----- - -. Open up `_topic_map.yml` from the root folder and add these lines at the end -of this file and then save. - -+ ----- ---- -Name: OpenShift CCS Mod Docs First Guide -Dir: my_guide -Distros: openshift-* -Topics: -- Name: My First Assembly - File: assembly_my-first-assembly ----- - -. On the command line, run `asciibinder` from the root folder of openshift-docs. -You do not have to add or commit your changes for asciibinder to run. - -. After the asciibinder build completes, open up your browser and navigate to -/openshift-docs/_preview/openshift-enterprise/my_first_mod_docs/my_guide/assembly_my-first-assembly.html - -. Confirm that your book `my_guide` has an assembly `My First Assembly` with the -contents from your module `My First Module`. - -NOTE: You can delete this branch now if you are done testing. This branch -should not be submitted to the upstream openshift-docs repository. diff --git a/modules/das-operator-installing-cli.adoc b/modules/das-operator-installing-cli.adoc deleted file mode 100644 index 6d3e4e266050..000000000000 --- a/modules/das-operator-installing-cli.adoc +++ /dev/null @@ -1,319 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/user/das-dynamic-accelerator-slicer-operator.adoc - -:_mod-docs-content-type: PROCEDURE -[id="das-operator-installing-cli_{context}"] -= Installing the Dynamic Accelerator Slicer Operator using the CLI - -As a cluster administrator, you can install the Dynamic Accelerator Slicer (DAS) Operator using the OpenShift CLI. - -.Prerequisites - -* You have access to an {product-title} cluster using an account with `cluster-admin` permissions. -* You have installed the OpenShift CLI (`oc`). -* You have installed the required prerequisites: -** cert-manager Operator for Red Hat OpenShift -** Node Feature Discovery (NFD) Operator -** NVIDIA GPU Operator -** NodeFeatureDiscovery CR - -.Procedure - -. Configure the NVIDIA GPU Operator for MIG support: - -.. Apply the following cluster policy to disable the default NVIDIA device plugin and enable MIG support. 
Create a file named `gpu-cluster-policy.yaml` with the following content: -+ -[source,yaml] ----- -apiVersion: nvidia.com/v1 -kind: ClusterPolicy -metadata: - name: gpu-cluster-policy -spec: - daemonsets: - rollingUpdate: - maxUnavailable: "1" - updateStrategy: RollingUpdate - dcgm: - enabled: true - dcgmExporter: - config: - name: "" - enabled: true - serviceMonitor: - enabled: true - devicePlugin: - config: - default: "" - name: "" - enabled: false - mps: - root: /run/nvidia/mps - driver: - certConfig: - name: "" - enabled: true - kernelModuleConfig: - name: "" - licensingConfig: - configMapName: "" - nlsEnabled: true - repoConfig: - configMapName: "" - upgradePolicy: - autoUpgrade: true - drain: - deleteEmptyDir: false - enable: false - force: false - timeoutSeconds: 300 - maxParallelUpgrades: 1 - maxUnavailable: 25% - podDeletion: - deleteEmptyDir: false - force: false - timeoutSeconds: 300 - waitForCompletion: - timeoutSeconds: 0 - useNvidiaDriverCRD: false - useOpenKernelModules: false - virtualTopology: - config: "" - gdrcopy: - enabled: false - gds: - enabled: false - gfd: - enabled: true - mig: - strategy: mixed - migManager: - config: - default: "" - name: default-mig-parted-config - enabled: true - env: - - name: WITH_REBOOT - value: 'true' - - name: MIG_PARTED_MODE_CHANGE_ONLY - value: 'true' - nodeStatusExporter: - enabled: true - operator: - defaultRuntime: crio - initContainer: {} - runtimeClass: nvidia - use_ocp_driver_toolkit: true - sandboxDevicePlugin: - enabled: true - sandboxWorkloads: - defaultWorkload: container - enabled: false - toolkit: - enabled: true - installDir: /usr/local/nvidia - validator: - plugin: - env: - - name: WITH_WORKLOAD - value: "false" - cuda: - env: - - name: WITH_WORKLOAD - value: "false" - vfioManager: - enabled: true - vgpuDeviceManager: - enabled: true - vgpuManager: - enabled: false ----- - -.. Apply the cluster policy by running the following command: -+ -[source,terminal] ----- -$ oc apply -f gpu-cluster-policy.yaml ----- - -.. Verify the NVIDIA GPU Operator cluster policy reaches the `Ready` state by running the following command: -+ -[source,terminal] ----- -$ oc get clusterpolicies.nvidia.com gpu-cluster-policy -w ----- -+ -Wait until the `STATUS` column shows `ready`. -+ -.Example output -+ -[source,terminal] ----- -NAME STATUS AGE -gpu-cluster-policy ready 2025-08-14T08:56:45Z ----- - -.. Verify that all pods in the NVIDIA GPU Operator namespace are running by running the following command: -+ -[source,terminal] ----- -$ oc get pods -n nvidia-gpu-operator ----- -+ -All pods should show a `Running` or `Completed` status. - -.. Label nodes with MIG-capable GPUs to enable MIG mode by running the following command: -+ -[source,terminal] ----- -$ oc label node $NODE_NAME nvidia.com/mig.config=all-enabled --overwrite ----- -+ -Replace `$NODE_NAME` with the name of each node that has MIG-capable GPUs. -+ -[IMPORTANT] -==== -After applying the MIG label, the labeled nodes reboot to enable MIG mode. Wait for the nodes to come back online before proceeding. -==== - -.. Verify that the nodes have successfully enabled MIG mode by running the following command: -+ -[source,terminal] ----- -$ oc get nodes -l nvidia.com/mig.config=all-enabled ----- - -. Create a namespace for the DAS Operator: - -.. 
Create the following `Namespace` custom resource (CR) that defines the `das-operator` namespace, and save the YAML in the `das-namespace.yaml` file: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Namespace -metadata: - name: das-operator - labels: - name: das-operator - openshift.io/cluster-monitoring: "true" ----- - -.. Create the namespace by running the following command: -+ -[source,terminal] ----- -$ oc create -f das-namespace.yaml ----- - -. Install the DAS Operator in the namespace you created in the previous step by creating the following objects: - -.. Create the following `OperatorGroup` CR and save the YAML in the `das-operatorgroup.yaml` file: -+ -[source,yaml] ----- -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - generateName: das-operator- - name: das-operator - namespace: das-operator ----- - -.. Create the `OperatorGroup` CR by running the following command: -+ -[source,terminal] ----- -$ oc create -f das-operatorgroup.yaml ----- - -.. Create the following `Subscription` CR and save the YAML in the `das-sub.yaml` file: -+ -.Example Subscription -[source,yaml] ----- -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: das-operator - namespace: das-operator -spec: - channel: "stable" - installPlanApproval: Automatic - name: das-operator - source: redhat-operators - sourceNamespace: openshift-marketplace ----- - -.. Create the subscription object by running the following command: -+ -[source,terminal] ----- -$ oc create -f das-sub.yaml ----- - -.. Change to the `das-operator` project: -+ -[source,terminal] ----- -$ oc project das-operator ----- - -.. Create the following `DASOperator` CR and save the YAML in the `das-dasoperator.yaml` file: -+ -.Example `DASOperator` CR -[source,yaml] ----- -apiVersion: inference.redhat.com/v1alpha1 -kind: DASOperator -metadata: - name: cluster <1> - namespace: das-operator -spec: - managementState: Managed - logLevel: Normal - operatorLogLevel: Normal ----- -<1> The name of the `DASOperator` CR must be `cluster`. - -.. Create the `dasoperator` CR by running the following command: -+ -[source,terminal] ----- -oc create -f das-dasoperator.yaml ----- - -.Verification - -* Verify that the Operator deployment is successful by running the following command: -+ -[source,terminal] ----- -$ oc get pods ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -das-daemonset-6rsfd 1/1 Running 0 5m16s -das-daemonset-8qzgf 1/1 Running 0 5m16s -das-operator-5946478b47-cjfcp 1/1 Running 0 5m18s -das-operator-5946478b47-npwmn 1/1 Running 0 5m18s -das-operator-webhook-59949d4f85-5n9qt 1/1 Running 0 68s -das-operator-webhook-59949d4f85-nbtdl 1/1 Running 0 68s -das-scheduler-6cc59dbf96-4r85f 1/1 Running 0 68s -das-scheduler-6cc59dbf96-bf6ml 1/1 Running 0 68s ----- -+ -A successful deployment shows all pods with a `Running` status. The deployment includes: -+ -das-operator:: Main Operator controller pods -das-operator-webhook:: Webhook server pods for mutating pod requests -das-scheduler:: Scheduler plugin pods for MIG slice allocation -das-daemonset:: Daemonset pods that run only on nodes with MIG-compatible GPUs -+ -[NOTE] -==== -The `das-daemonset` pods only appear on nodes that have MIG-compatible GPU hardware. If you do not see any daemonset pods, verify that your cluster has nodes with supported GPU hardware and that the NVIDIA GPU Operator is properly configured. 
-==== \ No newline at end of file diff --git a/modules/das-operator-installing-web-console.adoc b/modules/das-operator-installing-web-console.adoc deleted file mode 100644 index 8af6c610703b..000000000000 --- a/modules/das-operator-installing-web-console.adoc +++ /dev/null @@ -1,256 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/user/das-dynamic-accelerator-slicer-operator.adoc - -:_mod-docs-content-type: PROCEDURE -[id="das-operator-installing-web-console_{context}"] -= Installing the Dynamic Accelerator Slicer Operator using the web console - -As a cluster administrator, you can install the Dynamic Accelerator Slicer (DAS) Operator using the {product-title} web console. - -.Prerequisites - -* You have access to an {product-title} cluster using an account with `cluster-admin` permissions. -* You have installed the required prerequisites: -** cert-manager Operator for Red Hat OpenShift -** Node Feature Discovery (NFD) Operator -** NVIDIA GPU Operator -** NodeFeatureDiscovery CR - -.Procedure - -. Configure the NVIDIA GPU Operator for MIG support: - -.. In the {product-title} web console, navigate to *Operators* -> *Installed Operators*. - -.. Select the *NVIDIA GPU Operator* from the list of installed operators. - -.. Click the *ClusterPolicy* tab and then click *Create ClusterPolicy*. - -.. In the YAML editor, replace the default content with the following cluster policy configuration to disable the default NVIDIA device plugin and enable MIG support: -+ -[source,yaml] ----- -apiVersion: nvidia.com/v1 -kind: ClusterPolicy -metadata: - name: gpu-cluster-policy -spec: - daemonsets: - rollingUpdate: - maxUnavailable: "1" - updateStrategy: RollingUpdate - dcgm: - enabled: true - dcgmExporter: - config: - name: "" - enabled: true - serviceMonitor: - enabled: true - devicePlugin: - config: - default: "" - name: "" - enabled: false - mps: - root: /run/nvidia/mps - driver: - certConfig: - name: "" - enabled: true - kernelModuleConfig: - name: "" - licensingConfig: - configMapName: "" - nlsEnabled: true - repoConfig: - configMapName: "" - upgradePolicy: - autoUpgrade: true - drain: - deleteEmptyDir: false - enable: false - force: false - timeoutSeconds: 300 - maxParallelUpgrades: 1 - maxUnavailable: 25% - podDeletion: - deleteEmptyDir: false - force: false - timeoutSeconds: 300 - waitForCompletion: - timeoutSeconds: 0 - useNvidiaDriverCRD: false - useOpenKernelModules: false - virtualTopology: - config: "" - gdrcopy: - enabled: false - gds: - enabled: false - gfd: - enabled: true - mig: - strategy: mixed - migManager: - config: - default: "" - name: default-mig-parted-config - enabled: true - env: - - name: WITH_REBOOT - value: 'true' - - name: MIG_PARTED_MODE_CHANGE_ONLY - value: 'true' - nodeStatusExporter: - enabled: true - operator: - defaultRuntime: crio - initContainer: {} - runtimeClass: nvidia - use_ocp_driver_toolkit: true - sandboxDevicePlugin: - enabled: true - sandboxWorkloads: - defaultWorkload: container - enabled: false - toolkit: - enabled: true - installDir: /usr/local/nvidia - validator: - plugin: - env: - - name: WITH_WORKLOAD - value: "false" - cuda: - env: - - name: WITH_WORKLOAD - value: "false" - vfioManager: - enabled: true - vgpuDeviceManager: - enabled: true - vgpuManager: - enabled: false ----- - -.. Click *Create* to apply the cluster policy. - -.. Navigate to *Workloads* -> *Pods* and select the `nvidia-gpu-operator` namespace to monitor the cluster policy deployment. - -.. 
Wait for the NVIDIA GPU Operator cluster policy to reach the `Ready` state. You can monitor this by: -+ -... Navigating to *Operators* -> *Installed Operators* -> *NVIDIA GPU Operator*. -... Clicking the *ClusterPolicy* tab and checking that the status shows `ready`. - -.. Verify that all pods in the NVIDIA GPU Operator namespace are running by selecting the `nvidia-gpu-operator` namespace and navigating to *Workloads* -> *Pods*. - -.. Label nodes with MIG-capable GPUs to enable MIG mode: -+ -... Navigate to *Compute* -> *Nodes*. -... Select a node that has MIG-capable GPUs. -... Click *Actions* -> *Edit Labels*. -... Add the label `nvidia.com/mig.config=all-enabled`. -... Click *Save*. -... Repeat for each node with MIG-capable GPUs. -+ -[IMPORTANT] -==== -After applying the MIG label, the labeled nodes will reboot to enable MIG mode. Wait for the nodes to come back online before proceeding. -==== - -.. Verify that MIG mode is successfully enabled on the GPU nodes by checking that the `nvidia.com/mig.config=all-enabled` label appears in the *Labels* section. To locate the label, navigate to *Compute → Nodes*, select the GPU node, and click the *Details* tab. - -. In the {product-title} web console, click *Operators* -> *OperatorHub*. - -. Search for *Dynamic Accelerator Slicer* or *DAS* in the filter box to locate the DAS Operator. - -. Select the *Dynamic Accelerator Slicer* and click *Install*. - -. On the *Install Operator* page: -.. Select *All namespaces on the cluster (default)* for the installation mode. -.. Select *Installed Namespace* -> *Operator recommended Namespace: Project das-operator*. -.. If creating a new namespace, enter `das-operator` as the namespace name. -.. Select an update channel. -.. Select *Automatic* or *Manual* for the approval strategy. - -. Click *Install*. - -. In the {product-title} web console, click *Operators* -> *Installed Operators*. - -. Select *DAS Operator* from the list. - -. In the *Provided APIs* table column, click *DASOperator*. This takes you to the *DASOperator* tab of the *Operator details* page. - -. Click *Create DASOperator*. This takes you to the *Create DASOperator* YAML view. - -. In the YAML editor, paste the following example: -+ -.Example `DASOperator` CR -[source,yaml] ----- -apiVersion: inference.redhat.com/v1alpha1 -kind: DASOperator -metadata: - name: cluster <1> - namespace: das-operator -spec: - logLevel: Normal - operatorLogLevel: Normal - managementState: Managed ----- -<1> The name of the `DASOperator` CR must be `cluster`. - -. Click *Create*. - -.Verification - -To verify that the DAS Operator installed successfully: - -. Navigate to the *Operators* -> *Installed Operators* page. -. Ensure that *Dynamic Accelerator Slicer* is listed in the `das-operator` namespace with a *Status* of *Succeeded*. - -To verify that the `DASOperator` CR installed successfully: - -* After you create the `DASOperator` CR, the web console brings you to the *DASOperator list view*. The *Status* field of the CR changes to *Available* when all of the components are running. - -* Optional. You can verify that the `DASOperator` CR installed successfully by running the following command in the OpenShift CLI: -+ -[source,terminal] ----- -$ oc get dasoperator -n das-operator ----- -+ -.Example output -+ -[source,terminal] ----- -NAME STATUS AGE -cluster Available 3m ----- - -[NOTE] -==== -During installation an Operator might display a *Failed* status. 
If the installation later succeeds with an *Succeeded* message, you can ignore the *Failed* message. -==== - -You can also verify the installation by checking the pods: - -. Navigate to the *Workloads* -> *Pods* page and select the `das-operator` namespace. -. Verify that all DAS Operator component pods are running: -** `das-operator` pods (main operator controllers) -** `das-operator-webhook` pods (webhook servers) -** `das-scheduler` pods (scheduler plugins) -** `das-daemonset` pods (only on nodes with MIG-compatible GPUs) - -[NOTE] -==== -The `das-daemonset` pods will only appear on nodes that have MIG-compatible GPU hardware. If you do not see any daemonset pods, verify that your cluster has nodes with supported GPU hardware and that the NVIDIA GPU Operator is properly configured. -==== - -.Troubleshooting -Use the following procedure if the Operator does not appear to be installed: - -. Navigate to the *Operators* -> *Installed Operators* page and inspect the *Operator Subscriptions* and *Install Plans* tabs for any failure or errors under *Status*. -. Navigate to the *Workloads* -> *Pods* page and check the logs for pods in the `das-operator` namespace. \ No newline at end of file diff --git a/modules/das-operator-installing.adoc b/modules/das-operator-installing.adoc deleted file mode 100644 index dfc6c3b9e125..000000000000 --- a/modules/das-operator-installing.adoc +++ /dev/null @@ -1,10 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/user/das-dynamic-accelerator-slicer-operator.adoc - -:_mod-docs-content-type: CONCEPT -[id="das-operator-installing_{context}"] -= Installing the Dynamic Accelerator Slicer Operator - -As a cluster administrator, you can install the Dynamic Accelerator Slicer (DAS) Operator by using the {product-title} web console or the OpenShift CLI. - diff --git a/modules/das-operator-uninstalling-cli.adoc b/modules/das-operator-uninstalling-cli.adoc deleted file mode 100644 index a9e928bfbcbf..000000000000 --- a/modules/das-operator-uninstalling-cli.adoc +++ /dev/null @@ -1,96 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/user/das-dynamic-accelerator-slicer-operator.adoc -// -:_mod-docs-content-type: PROCEDURE -[id="das-operator-uninstalling-cli_{context}"] -= Uninstalling the Dynamic Accelerator Slicer Operator using the CLI - -You can uninstall the Dynamic Accelerator Slicer (DAS) Operator using the OpenShift CLI. - -.Prerequisites - -* You have access to an {product-title} cluster using an account with `cluster-admin` permissions. -* You have installed the OpenShift CLI (`oc`). -* The DAS Operator is installed in your cluster. - -.Procedure - -. List the installed operators to find the DAS Operator subscription by running the following command: -+ -[source,terminal] ----- -$ oc get subscriptions -n das-operator ----- -+ -.Example output -[source,terminal] ----- -NAME PACKAGE SOURCE CHANNEL -das-operator das-operator redhat-operators stable ----- - -. Delete the subscription by running the following command: -+ -[source,terminal] ----- -$ oc delete subscription das-operator -n das-operator ----- - -. List and delete the cluster service version (CSV) by running the following commands: -+ -[source,terminal] ----- -$ oc get csv -n das-operator ----- -+ -[source,terminal] ----- -$ oc delete csv -n das-operator ----- - -. Remove the operator group by running the following command: -+ -[source,terminal] ----- -$ oc delete operatorgroup das-operator -n das-operator ----- - -. 
Delete any remaining `AllocationClaim` resources by running the following command: -+ -[source,terminal] ----- -$ oc delete allocationclaims --all -n das-operator ----- - -. Remove the DAS Operator namespace by running the following command: -+ -[source,terminal] ----- -$ oc delete namespace das-operator ----- - -.Verification - -. Verify that the DAS Operator resources have been removed by running the following command: -+ -[source,terminal] ----- -$ oc get namespace das-operator ----- -+ -The command should return an error indicating that the namespace is not found. - -. Verify that no `AllocationClaim` custom resource definitions remain by running the following command: -+ -[source,terminal] ----- -$ oc get crd | grep allocationclaim ----- -+ -The command should return an error indicating that no custom resource definitions are found. - -[WARNING] -==== -Uninstalling the DAS Operator removes all GPU slice allocations and might cause running workloads that depend on GPU slices to fail. Ensure that no critical workloads are using GPU slices before proceeding with the uninstallation. -==== \ No newline at end of file diff --git a/modules/das-operator-uninstalling-web-console.adoc b/modules/das-operator-uninstalling-web-console.adoc deleted file mode 100644 index f9d49eb835a8..000000000000 --- a/modules/das-operator-uninstalling-web-console.adoc +++ /dev/null @@ -1,51 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/user/das-dynamic-accelerator-slicer-operator.adoc -// -:_mod-docs-content-type: PROCEDURE -[id="das-operator-uninstalling-web-console_{context}"] -= Uninstalling the Dynamic Accelerator Slicer Operator using the web console - -You can uninstall the Dynamic Accelerator Slicer (DAS) Operator using the {product-title} web console. - -.Prerequisites - -* You have access to an {product-title} cluster using an account with `cluster-admin` permissions. -* The DAS Operator is installed in your cluster. - -.Procedure - -. In the {product-title} web console, navigate to *Operators* -> *Installed Operators*. - -. Locate the *Dynamic Accelerator Slicer* in the list of installed Operators. - -. Click the *Options* menu {kebab} for the DAS Operator and select *Uninstall Operator*. - -. In the confirmation dialog, click *Uninstall* to confirm the removal. - -. Navigate to *Home* -> *Projects*. - -. Search for *das-operator* in the search box to locate the DAS Operator project. - -. Click the *Options* menu {kebab} next to the das-operator project, and select *Delete Project*. - -. In the confirmation dialog, type `das-operator` in the dialog box, and click *Delete* to confirm the deletion. - - -.Verification - -. Navigate to the *Operators* -> *Installed Operators* page. -. Verify that the Dynamic Accelerator Slicer (DAS) Operator is no longer listed. -. Optional. Verify that the `das-operator` namespace and its resources have been removed by running the following command: -+ -[source,terminal] ----- -$ oc get namespace das-operator ----- -+ -The command should return an error indicating that the namespace is not found. - -[WARNING] -==== -Uninstalling the DAS Operator removes all GPU slice allocations and might cause running workloads that depend on GPU slices to fail. Ensure that no critical workloads are using GPU slices before proceeding with the uninstallation. 
-==== \ No newline at end of file diff --git a/modules/das-operator-uninstalling.adoc b/modules/das-operator-uninstalling.adoc deleted file mode 100644 index adf96e976d7d..000000000000 --- a/modules/das-operator-uninstalling.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * operators/user/das-dynamic-accelerator-slicer-operator.adoc -// -:_mod-docs-content-type: CONCEPT -[id="das-operator-uninstalling_{context}"] -= Uninstalling the Dynamic Accelerator Slicer Operator - -Use one of the following procedures to uninstall the Dynamic Accelerator Slicer (DAS) Operator, depending on how the Operator was installed. \ No newline at end of file diff --git a/modules/efk-logging-deploying-storage-considerations.adoc b/modules/efk-logging-deploying-storage-considerations.adoc deleted file mode 100644 index 6ee41b33d064..000000000000 --- a/modules/efk-logging-deploying-storage-considerations.adoc +++ /dev/null @@ -1,108 +0,0 @@ -// Module included in the following assemblies: -// -// * logging/efk-logging-deploy.adoc - -[id="efk-logging-deploy-storage-considerations_{context}"] -= Storage considerations for cluster logging and {product-title} - -An Elasticsearch index is a collection of primary shards and its corresponding replica -shards. This is how ES implements high availability internally, therefore there -is little requirement to use hardware based mirroring RAID variants. RAID 0 can still -be used to increase overall disk performance. - -//Following paragraph also in nodes/efk-logging-elasticsearch - -Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and CPU limits -unless you specify otherwise the Cluster Logging Custom Resource. The initial set of {product-title} nodes might not be large enough -to support the Elasticsearch cluster. You must add additional nodes to the {product-title} cluster to run with the recommended -or higher memory. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. - -//// -Each Elasticsearch data node requires its own individual storage, but an {product-title} deployment -can only provide volumes shared by all of its pods, which again means that -Elasticsearch clusters should not be implemented with a single deployment. -//// - -A persistent volume is required for each Elasticsearch deployment to have one data volume per data node. On {product-title} this is achieved using -Persistent Volume Claims. - -The Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Refer to -Persistent Elasticsearch Storage for more details. - -Below are capacity planning guidelines for {product-title} aggregate logging. - -*Example scenario* - -Assumptions: - -. Which application: Apache -. Bytes per line: 256 -. Lines per second load on application: 1 -. Raw text data -> JSON - -Baseline (256 characters per minute -> 15KB/min) - -[cols="3,4",options="header"] -|=== -|Logging Pods -|Storage Throughput - -|3 es -1 kibana -1 curator -1 fluentd -| 6 pods total: 90000 x 86400 = 7,7 GB/day - -|3 es -1 kibana -1 curator -11 fluentd -| 16 pods total: 225000 x 86400 = 24,0 GB/day - -|3 es -1 kibana -1 curator -20 fluentd -|25 pods total: 225000 x 86400 = 32,4 GB/day -|=== - - -Calculating total logging throughput and disk space required for your {product-title} cluster requires knowledge of your applications. 
For example, if one of your -applications on average logs 10 lines-per-second, each 256 bytes-per-line, -calculate per-application throughput and disk space as follows: - ----- - (bytes-per-line) * (lines-per-second) = 2560 bytes per app per second - (2560) * (number-of-pods-per-node, 100) = 256,000 bytes per second per node - 256k * (number-of-nodes) = total logging throughput per cluster ----- - -Fluentd ships any logs from *systemd journal* and */var/log/containers/* to Elasticsearch. - -//// -Local SSD drives are recommended in order to achieve the best performance. In -Red Hat Enterprise Linux (RHEL) 7, the -link:https://access.redhat.com/articles/425823[deadline] IO scheduler is the -default for all block devices except SATA disks. For SATA disks, the default IO -scheduler is *cfq*. -//// - -Therefore, consider in advance how much data you need and remember that you are -aggregating application log data. Some Elasticsearch users have found that it -is necessary to -link:https://signalfx.com/blog/how-we-monitor-and-run-elasticsearch-at-scale/[keep -absolute storage consumption around 50% and below 70% at all times]. This -helps to avoid Elasticsearch becoming unresponsive during large merge -operations. - -By default, at 85% disk usage ES stops allocating new data to the node, and at 90% ES starts relocating -existing shards from that node to other nodes if possible. If no nodes have -free capacity below 85%, ES effectively rejects the creation of new indices -and the cluster status becomes RED. - -[NOTE] -==== -These low and high watermark values are Elasticsearch defaults in the current release. You can modify these values, -but you must also apply any modifications to the alerts, because the alerts are based -on these defaults. -==== diff --git a/modules/feature-gate-features.adoc b/modules/feature-gate-features.adoc deleted file mode 100644 index 70df7a59097f..000000000000 --- a/modules/feature-gate-features.adoc +++ /dev/null @@ -1,34 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/nodes-cluster-disabling-features.adoc -// * nodes/nodes-cluster-enabling-features.adoc - -[id="feature-gate-features_{context}"] -= Features that are affected by FeatureGates - -The following features are affected by FeatureGates: - -[options="header"] -|=== -| FeatureGate| Description| Default - -|`RotateKubeletServerCertificate` -|Enables the rotation of the server TLS certificate on the cluster. -|True - -|`SupportPodPidsLimit` -|Enables support for limiting the number of processes (PIDs) running in a pod. -|True - -|`MachineHealthCheck` -|Enables automatically repairing unhealthy machines in a machine pool. -|True - -|`LocalStorageCapacityIsolation` -|Enables the consumption of local ephemeral storage and the `sizeLimit` property of an `emptyDir` volume. -|False - -|=== - -You can enable these features by editing the Feature Gate Custom Resource. -Turning on these features cannot be undone and prevents your cluster from being upgraded.
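For illustration, a minimal edit of the Feature Gate Custom Resource might look like the following sketch. It assumes the cluster-scoped `FeatureGate` resource named `cluster` from the `config.openshift.io/v1` API and the `CustomNoUpgrade` feature set; the gate name shown is only an example, and the available names vary by release.

[source,yaml]
----
apiVersion: config.openshift.io/v1
kind: FeatureGate
metadata:
  name: cluster                   # the cluster-scoped Feature Gate resource
spec:
  featureSet: CustomNoUpgrade     # selecting a custom feature set cannot be undone
  customNoUpgrade:
    enabled:
    - SupportPodPidsLimit         # example gate name; confirm against your release
----

As the preceding paragraph notes, applying a change like this cannot be undone and prevents cluster upgrades.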
diff --git a/modules/importing-manifest-list-through-imagestreamimport.adoc b/modules/importing-manifest-list-through-imagestreamimport.adoc deleted file mode 100644 index 395ad21eb4c5..000000000000 --- a/modules/importing-manifest-list-through-imagestreamimport.adoc +++ /dev/null @@ -1,44 +0,0 @@ -// Module included in the following assemblies: -// * openshift_images/image-streams-manage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="importing-manifest-list-through-imagestreamimport_{context}"] -= Importing a manifest list through ImageStreamImport - - -You can use the `ImageStreamImport` resource to find and import image manifests from other container image registries into the cluster. Individual images or an entire image repository can be imported. - -Use the following procedure to import a manifest list through the `ImageStreamImport` object with the `importMode` value. - -.Procedure - -. Create an `ImageStreamImport` YAML file and set the `importMode` parameter to `PreserveOriginal` on the tags that you will import as a manifest list: -+ -[source,yaml] ----- -apiVersion: image.openshift.io/v1 -kind: ImageStreamImport -metadata: - name: app - namespace: myapp -spec: - import: true - images: - - from: - kind: DockerImage - name: // - to: - name: latest - referencePolicy: - type: Source - importPolicy: - importMode: "PreserveOriginal" ----- - -. Create the `ImageStreamImport` by running the following command: -+ -[source,terminal] ----- -$ oc create -f ----- - diff --git a/modules/install-creating-install-config-aws-local-zones-subnets.adoc b/modules/install-creating-install-config-aws-local-zones-subnets.adoc deleted file mode 100644 index 7dfce6af0479..000000000000 --- a/modules/install-creating-install-config-aws-local-zones-subnets.adoc +++ /dev/null @@ -1,35 +0,0 @@ -// Module included in the following assemblies: -// * installing/installing_aws/installing-aws-localzone.adoc - -:_mod-docs-content-type: PROCEDURE -[id="install-creating-install-config-aws-local-zones-subnets_{context}"] -= Modifying an installation configuration file to use AWS Local Zones subnets - -Modify an `install-config.yaml` file to include AWS Local Zones subnets. - -.Prerequisites - -* You created subnets by using the procedure "Creating a subnet in AWS Local Zones". -* You created an `install-config.yaml` file by using the procedure "Creating the installation configuration file". - -.Procedure - -* Modify the `install-config.yaml` configuration file by specifying Local Zone subnets in the `platform.aws.subnets` property, as demonstrated in the following example: -+ -[source,yaml] ----- -... -platform: - aws: - region: us-west-2 - subnets: <1> - - publicSubnetId-1 - - publicSubnetId-2 - - publicSubnetId-3 - - privateSubnetId-1 - - privateSubnetId-2 - - privateSubnetId-3 - - publicSubnetId-LocalZone-1 -... ----- -<1> List of subnets created in the Availability and Local Zones. 
\ No newline at end of file diff --git a/modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc b/modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc deleted file mode 100644 index f46247c6a305..000000000000 --- a/modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// This module is included in the following assemblies: -// -// installing/installing_sno/install-sno-preparing-to-install-sno.adoc - -:_mod-docs-content-type: CONCEPT -[id="additional-requirements-for-installing-sno-on-a-cloud-provider_{context}"] -= Additional requirements for installing {sno} on a cloud provider - -The AWS documentation for installer-provisioned installation is written with a high availability cluster consisting of three control plane nodes. When referring to the AWS documentation, consider the differences between the requirements for a {sno} cluster and a high availability cluster. - -* The required machines for cluster installation in AWS documentation indicates a temporary bootstrap machine, three control plane machines, and at least two compute machines. You require only a temporary bootstrap machine and one AWS instance for the control plane node and no worker nodes. - -* The minimum resource requirements for cluster installation in the AWS documentation indicates a control plane node with 4 vCPUs and 100GB of storage. For a single node cluster, you must have a minimum of 8 vCPU cores and 120GB of storage. - -* The `controlPlane.replicas` setting in the `install-config.yaml` file should be set to `1`. - -* The `compute.replicas` setting in the `install-config.yaml` file should be set to `0`. -This makes the control plane node schedulable. diff --git a/modules/installation-about-custom.adoc b/modules/installation-about-custom.adoc deleted file mode 100644 index 8e26117c63b6..000000000000 --- a/modules/installation-about-custom.adoc +++ /dev/null @@ -1,52 +0,0 @@ -// Module included in the following assemblies: -// -// * orphaned - -[id="installation-about-custom_{context}"] -= About the custom installation - -You can use the {product-title} installation program to customize four levels -of the program: - -* {product-title} itself -* The cluster platform -* Kubernetes -* The cluster operating system - -Changes to {product-title} and its platform are managed and supported, but -changes to Kubernetes and the cluster operating system currently are not. If -you customize unsupported levels program levels, future installation and -upgrades might fail. - -When you select values for the prompts that the installation program presents, -you customize {product-title}. You can further modify the cluster platform -by modifying the `install-config.yaml` file that the installation program -uses to deploy your cluster. In this file, you can make changes like setting the -number of machines that the control plane uses, the type of virtual machine -that the cluster deploys, or the CIDR range for the Kubernetes service network. - -It is possible, but not supported, to modify the Kubernetes objects that are injected into the cluster. -A common modification is additional manifests in the initial installation. -No validation is available to confirm the validity of any modifications that -you make to these manifests, so if you modify these objects, you might render -your cluster non-functional. 
-[IMPORTANT] -==== -Modifying the Kubernetes objects is not supported. -==== - -Similarly it is possible, but not supported, to modify the -Ignition config files for the bootstrap and other machines. No validation is -available to confirm the validity of any modifications that -you make to these Ignition config files, so if you modify these objects, you might render -your cluster non-functional. - -[IMPORTANT] -==== -Modifying the Ignition config files is not supported. -==== - -To complete a custom installation, you use the installation program to generate -the installation files and then customize them. -The installation status is stored in a hidden -file in the asset directory and contains all of the installation files. diff --git a/modules/installation-aws-editing-manifests.adoc b/modules/installation-aws-editing-manifests.adoc deleted file mode 100644 index 6ceecf30842f..000000000000 --- a/modules/installation-aws-editing-manifests.adoc +++ /dev/null @@ -1,116 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_aws/installing-aws-outposts-remote-workers.adoc - -ifeval::["{context}" == "aws-compute-edge-zone-tasks"] -:post-aws-zones: -endif::[] - -:_mod-docs-content-type: PROCEDURE -[id="installation-aws-creating-manifests_{context}"] -= Generating manifest files - -Use the installation program to generate a set of manifest files in the assets directory. Manifest files are required to specify the AWS Outposts subnets to use for worker machines, and to specify settings required by the network provider. - -If you plan to reuse the `install-config.yaml` file, create a backup file before you generate the manifest files. - -.Procedure - -. Optional: Create a backup copy of the `install-config.yaml` file: -+ -[source,terminal] ----- -$ cp install-config.yaml install-config.yaml.backup ----- - -. Generate a set of manifests in your assets directory: -+ -[source,terminal] ----- -$ openshift-install create manifests --dir ----- -+ -This command displays the following messages. -+ -.Example output -[source,terminal] ----- -INFO Consuming Install Config from target directory -INFO Manifests created in: /manifests and /openshift ----- -+ -The command generates the following manifest files: -+ -.Example output -[source,terminal] ----- -$ tree -. 
-├── manifests -│  ├── cluster-config.yaml -│  ├── cluster-dns-02-config.yml -│  ├── cluster-infrastructure-02-config.yml -│  ├── cluster-ingress-02-config.yml -│  ├── cluster-network-01-crd.yml -│  ├── cluster-network-02-config.yml -│  ├── cluster-proxy-01-config.yaml -│  ├── cluster-scheduler-02-config.yml -│  ├── cvo-overrides.yaml -│  ├── kube-cloud-config.yaml -│  ├── kube-system-configmap-root-ca.yaml -│  ├── machine-config-server-tls-secret.yaml -│  └── openshift-config-secret-pull-secret.yaml -└── openshift - ├── 99_cloud-creds-secret.yaml - ├── 99_kubeadmin-password-secret.yaml - ├── 99_openshift-cluster-api_master-machines-0.yaml - ├── 99_openshift-cluster-api_master-machines-1.yaml - ├── 99_openshift-cluster-api_master-machines-2.yaml - ├── 99_openshift-cluster-api_master-user-data-secret.yaml - ├── 99_openshift-cluster-api_worker-machineset-0.yaml - ├── 99_openshift-cluster-api_worker-user-data-secret.yaml - ├── 99_openshift-machineconfig_99-master-ssh.yaml - ├── 99_openshift-machineconfig_99-worker-ssh.yaml - ├── 99_role-cloud-creds-secret-reader.yaml - └── openshift-install-manifests.yaml - ----- - -[id="installation-aws-editing-manifests_{context}"] -== Modifying manifest files - -[NOTE] -==== -The AWS Outposts environments has the following limitations which require manual modification in the manifest generated files: - -* The maximum transmission unit (MTU) of a network connection is the size, in bytes, of the largest permissible packet that can be passed over the connection. The Outpost service link supports a maximum packet size of 1300 bytes. For more information about the service link, see link:https://docs.aws.amazon.com/outposts/latest/userguide/region-connectivity.html[Outpost connectivity to AWS Regions] - -You will find more information about how to change these values below. -==== - -* Use Outpost Subnet for workers `machineset` -+ -Modify the following file: -/openshift/99_openshift-cluster-api_worker-machineset-0.yaml -Find the subnet ID and replace it with the ID of the private subnet created in the Outpost. As a result, all the worker machines will be created in the Outpost. - -* Specify MTU value for the Network Provider -+ -Outpost service links support a maximum packet size of 1300 bytes. You must modify the MTU of the Network Provider to follow this requirement. -Create a new file under the manifests directory and name the file `cluster-network-03-config.yml`. For the OVN-Kubernetes network provider, set the MTU value to 1200. 
-+ -[source,yaml] ----- -apiVersion: operator.openshift.io/v1 -kind: Network -metadata: - name: cluster -spec: - defaultNetwork: - ovnKubernetesConfig: - mtu: 1200 ----- - -ifeval::["{context}" == "aws-compute-edge-zone-tasks"] -:!post-aws-zones: -endif::[] diff --git a/modules/installation-azure-finalizing-encryption.adoc b/modules/installation-azure-finalizing-encryption.adoc deleted file mode 100644 index faeb1032349b..000000000000 --- a/modules/installation-azure-finalizing-encryption.adoc +++ /dev/null @@ -1,155 +0,0 @@ -//Module included in the following assemblies: -// -// * installing/installing_azure/installing-azure-customizations.adoc -// * installing/installing_azure/installing-azure-government-region.adoc -// * installing/installing_azure/installing-azure-network-customizations.adoc -// * installing/installing_azure/installing-azure-private.adoc -// * installing/installing_azure/installing-azure-vnet.adoc - - -ifeval::["{context}" == "installing-azure-customizations"] -:azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-government-region"] -:azure-gov: -endif::[] -ifeval::["{context}" == "installing-azure-network-customizations"] -:azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-private"] -:azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-vnet"] -:azure-public: -endif::[] - -:_mod-docs-content-type: PROCEDURE -[id="finalizing-encryption_{context}"] -= Finalizing user-managed encryption after installation -If you installed {product-title} using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. - -.Procedure - -. Obtain the identity of the cluster resource group used by the installer: -.. If you specified an existing resource group in `install-config.yaml`, obtain its Azure identity by running the following command: -+ -[source,terminal] ----- -$ az identity list --resource-group "" ----- -.. If you did not specify a existing resource group in `install-config.yaml`, locate the resource group that the installer created, and then obtain its Azure identity by running the following commands: -+ -[source,terminal] ----- -$ az group list ----- -+ -[source,terminal] ----- -$ az identity list --resource-group "" ----- -+ -. Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command: -+ -[source,terminal] ----- -$ az role assignment create --role "" \// <1> - --assignee "" <2> ----- -<1> Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the `Owner` role or a custom role with the necessary permissions. -<2> Specifies the identity of the cluster resource group. -+ -. Obtain the `id` of the disk encryption set you created prior to installation by running the following command: -+ -[source,terminal] ----- -$ az disk-encryption-set show -n \// <1> - --resource-group <2> ----- -<1> Specifies the name of the disk encryption set. -<2> Specifies the resource group that contains the disk encryption set. -The `id` is in the format of `"/subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/diskEncryptionSets/..."`. -+ -. 
Obtain the identity of the cluster service principal by running the following command: -+ -[source,terminal] ----- -$ az identity show -g \// <1> - -n \// <2> - --query principalId --out tsv ----- -<1> Specifies the name of the cluster resource group created by the installation program. -<2> Specifies the name of the cluster service principal created by the installation program. -The identity is in the format of `12345678-1234-1234-1234-1234567890`. -ifdef::azure-gov[] -. Create a role assignment that grants the cluster service principal `Contributor` privileges to the disk encryption set by running the following command: -+ -[source,terminal] ----- -$ az role assignment create --assignee \// <1> - --role 'Contributor' \// - --scope \// <2> ----- -<1> Specifies the ID of the cluster service principal obtained in the previous step. -<2> Specifies the ID of the disk encryption set. -endif::azure-gov[] -ifdef::azure-public[] -. Create a role assignment that grants the cluster service principal necessary privileges to the disk encryption set by running the following command: -+ -[source,terminal] ----- -$ az role assignment create --assignee \// <1> - --role \// <2> - --scope \// <3> ----- -<1> Specifies the ID of the cluster service principal obtained in the previous step. -<2> Specifies the Azure role name. You can use the `Contributor` role or a custom role with the necessary permissions. -<3> Specifies the ID of the disk encryption set. -endif::azure-public[] -+ -. Create a storage class that uses the user-managed disk encryption set: -.. Save the following storage class definition to a file, for example `storage-class-definition.yaml`: -+ -[source,yaml] ----- -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: managed-premium -provisioner: kubernetes.io/azure-disk -parameters: - skuname: Premium_LRS - kind: Managed - diskEncryptionSetID: "" <1> - resourceGroup: "" <2> -reclaimPolicy: Delete -allowVolumeExpansion: true -volumeBindingMode: WaitForFirstConsumer ----- -<1> Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example `"/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx"`. -<2> Specifies the name of the resource group used by the installer. This is the same resource group from the first step. -.. Create the storage class `managed-premium` from the file you created by running the following command: -+ -[source,terminal] ----- -$ oc create -f storage-class-definition.yaml ----- -. Select the `managed-premium` storage class when you create persistent volumes to use encrypted storage. 
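As an illustration of that last step, a persistent volume claim that requests the `managed-premium` storage class might look like the following sketch; the claim name and size are arbitrary examples rather than values from this procedure.

[source,yaml]
----
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-pvc                    # example name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: managed-premium      # storage class created in the previous step
  resources:
    requests:
      storage: 10Gi                      # example size
----

Volumes provisioned from such a claim are encrypted with the user-managed disk encryption set because the storage class references it.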
- - - -ifeval::["{context}" == "installing-azure-customizations"] -:!azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-government-region"] -:!azure-gov: -endif::[] -ifeval::["{context}" == "installing-azure-network-customizations"] -:!azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-private"] -:!azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-vnet"] -:!azure-public: -endif::[] \ No newline at end of file diff --git a/modules/installation-creating-worker-machineset.adoc b/modules/installation-creating-worker-machineset.adoc deleted file mode 100644 index fab07717826c..000000000000 --- a/modules/installation-creating-worker-machineset.adoc +++ /dev/null @@ -1,144 +0,0 @@ -// Module included in the following assemblies: -// -// * none - -[id="installation-creating-worker-machineset_{context}"] -= Creating worker nodes that the cluster manages - -After your cluster initializes, you can create workers that are controlled by -a MachineSet in your Amazon Web Services (AWS) user-provisioned infrastructure -cluster. - -.Prerequisites - -* You installed a cluster on AWS by using infrastructure that you provisioned. - -.Procedure - -. Optional: Launch worker nodes that are controlled by the machine API. -. View the list of MachineSets in the `openshift-machine-api` namespace: -+ ----- -$ oc get machinesets --namespace openshift-machine-api -NAME DESIRED CURRENT READY AVAILABLE AGE -test-tkh7l-worker-us-east-2a 1 1 11m -test-tkh7l-worker-us-east-2b 1 1 11m -test-tkh7l-worker-us-east-2c 1 1 11m ----- -+ -Note the `NAME` of each MachineSet. Because you use a different subnet than the -installation program expects, the worker MachineSets do not use the correct -network settings. You must edit each of these MachineSets. - -. 
Edit each worker MachineSet to provide the correct values for your cluster: -+ ----- -$ oc edit machineset --namespace openshift-machine-api test-tkh7l-worker-us-east-2a -o yaml -apiVersion: machine.openshift.io/v1beta1 -kind: MachineSet -metadata: - creationTimestamp: 2019-03-14T14:03:03Z - generation: 1 - labels: - machine.openshift.io/cluster-api-cluster: test-tkh7l - machine.openshift.io/cluster-api-machine-role: worker - machine.openshift.io/cluster-api-machine-type: worker - name: test-tkh7l-worker-us-east-2a - namespace: openshift-machine-api - resourceVersion: "2350" - selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/test-tkh7l-worker-us-east-2a - uid: e2a6c8a6-4661-11e9-a9b0-0296069fd3a2 -spec: - replicas: 1 - selector: - matchLabels: - machine.openshift.io/cluster-api-cluster: test-tkh7l - machine.openshift.io/cluster-api-machineset: test-tkh7l-worker-us-east-2a - template: - metadata: - creationTimestamp: null - labels: - machine.openshift.io/cluster-api-cluster: test-tkh7l - machine.openshift.io/cluster-api-machine-role: worker - machine.openshift.io/cluster-api-machine-type: worker - machine.openshift.io/cluster-api-machineset: test-tkh7l-worker-us-east-2a - spec: - metadata: - creationTimestamp: null - providerSpec: - value: - ami: - id: ami-07e0e0e0035b5a3fe <1> - apiVersion: awsproviderconfig.openshift.io/v1beta1 - blockDevices: - - ebs: - iops: 0 - volumeSize: 120 - volumeType: gp2 - credentialsSecret: - name: aws-cloud-credentials - deviceIndex: 0 - iamInstanceProfile: - id: test-tkh7l-worker-profile - instanceType: m4.large - kind: AWSMachineProviderConfig - metadata: - creationTimestamp: null - placement: - availabilityZone: us-east-2a - region: us-east-2 - publicIp: null - securityGroups: - - filters: - - name: tag:Name - values: - - test-tkh7l-worker-sg <2> - subnet: - filters: - - name: tag:Name - values: - - test-tkh7l-private-us-east-2a - tags: - - name: kubernetes.io/cluster/test-tkh7l - value: owned - userDataSecret: - name: worker-user-data - versions: - kubelet: "" -status: - fullyLabeledReplicas: 1 - observedGeneration: 1 - replicas: 1 ----- -<1> Specify the {op-system-first} AMI to use for your worker nodes. Use the same -value that you specified in the parameter values for your control plane and -bootstrap templates. -<2> Specify the name of the worker security group that you created in the form -`-worker-sg`. `` is the same -infrastructure name that you extracted from the Ignition config metadata, -which has the format `-`. - -//// -. Optional: Replace the `subnet` stanza with one that specifies the subnet -to deploy the machines on: -+ ----- -subnet: - filters: - - name: tag: <1> - values: - - test-tkh7l-private-us-east-2a <2> ----- -<1> Set the `` of the tag to `Name`, `ID`, or `ARN`. -<2> Specify the `Name`, `ID`, or `ARN` value for the subnet. This value must -match the `tag` type that you specify. -//// - -. 
View the machines in the `openshift-machine-api` namespace and confirm that -they are launching: -+ ----- -$ oc get machines --namespace openshift-machine-api -NAME INSTANCE STATE TYPE REGION ZONE AGE -test-tkh7l-worker-us-east-2a-hxlqn i-0e7f3a52b2919471e pending m4.4xlarge us-east-2 us-east-2a 3s ----- diff --git a/modules/installation-gcp-shared-vpc-ingress.adoc b/modules/installation-gcp-shared-vpc-ingress.adoc deleted file mode 100644 index 38aabec405af..000000000000 --- a/modules/installation-gcp-shared-vpc-ingress.adoc +++ /dev/null @@ -1,49 +0,0 @@ -// File included in the following assemblies: -// * installation/installing_gcp/installing-gcp-shared-vpc.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installation-gcp-shared-vpc-ingress_{context}"] -= Optional: Adding Ingress DNS records for shared VPC installations -If the public DNS zone exists in a host project outside the project where you installed your cluster, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard `*.apps.{baseDomain}.` or specific records. You can use A, CNAME, and other records per your requirements. - -.Prerequisites -* You completed the installation of {product-title} on GCP into a shared VPC. -* Your public DNS zone exists in a host project separate from the service project that contains your cluster. - -.Procedure -. Verify that the Ingress router has created a load balancer and populated the `EXTERNAL-IP` field by running the following command: -+ -[source,terminal] ----- -$ oc -n openshift-ingress get service router-default ----- -+ -.Example output -[source,terminal] ----- -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 ----- -. Record the external IP address of the router by running the following command: -+ -[source,terminal] ----- -$ oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}' ----- -. Add a record to your GCP public zone with the router's external IP address and the name `*.apps..`. You can use the `gcloud` command-line utility or the GCP web console. -. To add manual records instead of a wildcard record, create entries for each of the cluster's current routes. 
You can gather these routes by running the following command: -+ -[source,terminal] ----- -$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes ----- -+ -.Example output -[source,terminal] ----- -oauth-openshift.apps.your.cluster.domain.example.com -console-openshift-console.apps.your.cluster.domain.example.com -downloads-openshift-console.apps.your.cluster.domain.example.com -alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com -prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com ----- diff --git a/modules/installation-identify-supported-aws-outposts-instance-types.adoc b/modules/installation-identify-supported-aws-outposts-instance-types.adoc deleted file mode 100644 index fbb9ee8ee8b7..000000000000 --- a/modules/installation-identify-supported-aws-outposts-instance-types.adoc +++ /dev/null @@ -1,27 +0,0 @@ -// Module included in the following assemblies: -// -// installing/installing_aws/installing-aws-outposts-remote-workers.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installation-identify-supported-aws-outposts-instance-types_{context}"] -= Identifying your AWS Outposts instance types - -AWS Outposts rack catalog includes options supporting the latest generation Intel powered EC2 instance types with or without local instance storage. -Identify which instance types are configured in your AWS Outpost instance. As part of the installation process, you must update the `install-config.yaml` file with the instance type that the installation program will use to deploy worker nodes. - -.Procedure - -Use the AWS CLI to get the list of supported instance types by running the following command: -[source,terminal] ----- -$ aws outposts get-outpost-instance-types --outpost-id <1> ----- -<1> For ``, specify the Outpost ID, used in the AWS account for the worker instances - -+ -[IMPORTANT] -==== -When you purchase capacity for your AWS Outpost instance, you specify an EC2 capacity layout that each server provides. Each server supports a single family of instance types. A layout can offer a single instance type or multiple instance types. Dedicated Hosts allows you to alter whatever you chose for that initial layout. If you allocate a host to support a single instance type for the entire capacity, you can only start a single instance type from that host. -==== - -Supported instance types in AWS Outposts might be changed. For more information, you can check the link:https://aws.amazon.com/outposts/rack/features/#Compute_and_storage[Compute and Storage] page in AWS Outposts documents. diff --git a/modules/installation-localzone-generate-k8s-manifest.adoc b/modules/installation-localzone-generate-k8s-manifest.adoc deleted file mode 100644 index 7243d4225873..000000000000 --- a/modules/installation-localzone-generate-k8s-manifest.adoc +++ /dev/null @@ -1,176 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_aws/installing-aws-localzone.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installation-localzone-generate-k8s-manifest_{context}"] -= Creating the Kubernetes manifest files - -Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest files that the cluster needs to configure the machines. - -.Prerequisites - -* You obtained the {product-title} installation program. -* You created the `install-config.yaml` installation configuration file. -* You installed the `jq` package. 
- -.Procedure - -. Change to the directory that contains the {product-title} installation program and generate the Kubernetes manifests for the cluster by running the following command: -+ -[source,terminal] ----- -$ ./openshift-install create manifests --dir <1> ----- -+ -<1> For ``, specify the installation directory that -contains the `install-config.yaml` file you created. - -. Set the default Maximum Transmission Unit (MTU) according to the network plugin: -+ -[IMPORTANT] -==== -Generally, the Maximum Transmission Unit (MTU) between an Amazon EC2 instance in a Local Zone and an Amazon EC2 instance in the Region is 1300. See link:https://docs.aws.amazon.com/local-zones/latest/ug/how-local-zones-work.html[How Local Zones work] in the AWS documentation. -The cluster network MTU must be always less than the EC2 MTU to account for the overhead. The specific overhead for the OVN-Kubernetes networking plugin is `100 bytes`. - -The network plugin could provide additional features, like IPsec, that also must decrease the MTU. Check the documentation for additional information. - -==== - -.. For the `OVN-Kubernetes` network plugin, enter the following command: -+ -[source,terminal] ----- -$ cat < /manifests/cluster-network-03-config.yml -apiVersion: operator.openshift.io/v1 -kind: Network -metadata: - name: cluster -spec: - defaultNetwork: - ovnKubernetesConfig: - mtu: 1200 -EOF ----- - -. Create the machine set manifests for the worker nodes in your Local Zone. -.. Export a local variable that contains the name of the Local Zone that you opted your AWS account into by running the following command: -+ -[source,terminal] ----- -$ export LZ_ZONE_NAME="" <1> ----- -<1> For ``, specify the Local Zone that you opted your AWS account into, such as `us-east-1-nyc-1a`. - -.. Review the instance types for the location that you will deploy to by running the following command: -+ -[source,terminal] ----- -$ aws ec2 describe-instance-type-offerings \ - --location-type availability-zone \ - --filters Name=location,Values=${LZ_ZONE_NAME} - --region <1> ----- -<1> For ``, specify the name of the region that you will deploy to, such as `us-east-1`. - -.. Export a variable to define the instance type for the worker machines to deploy on the Local Zone subnet by running the following command: -+ -[source,terminal] ----- -$ export INSTANCE_TYPE="" <1> ----- -<1> Set `` to a tested instance type, such as `c5d.2xlarge`. - -.. Store the AMI ID as a local variable by running the following command: -+ -[source,terminal] ----- -$ export AMI_ID=$(grep ami - /openshift/99_openshift-cluster-api_worker-machineset-0.yaml \ - | tail -n1 | awk '{print$2}') ----- - -.. Store the subnet ID as a local variable by running the following command: -+ -[source,terminal] ----- -$ export SUBNET_ID=$(aws cloudformation describe-stacks --stack-name "" \ <1> - | jq -r '.Stacks[0].Outputs[0].OutputValue') ----- -<1> For ``, specify the name of the subnet stack that you created. - -.. Store the cluster ID as local variable by running the following command: -+ -[source,terminal] ----- -$ export CLUSTER_ID="$(awk '/infrastructureName: / {print $2}' /manifests/cluster-infrastructure-02-config.yml)" ----- - -.. 
Create the worker manifest file for the Local Zone that your VPC uses by running the following command: -+ -[source,terminal] ----- -$ cat < /openshift/99_openshift-cluster-api_worker-machineset-nyc1.yaml -apiVersion: machine.openshift.io/v1beta1 -kind: MachineSet -metadata: - labels: - machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID} - name: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME} - namespace: openshift-machine-api -spec: - replicas: 1 - selector: - matchLabels: - machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID} - machine.openshift.io/cluster-api-machineset: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME} - template: - metadata: - labels: - machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID} - machine.openshift.io/cluster-api-machine-role: edge - machine.openshift.io/cluster-api-machine-type: edge - machine.openshift.io/cluster-api-machineset: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME} - spec: - metadata: - labels: - zone_type: local-zone - zone_group: ${LZ_ZONE_NAME:0:-1} - node-role.kubernetes.io/edge: "" - taints: - - key: node-role.kubernetes.io/edge - effect: NoSchedule - providerSpec: - value: - ami: - id: ${AMI_ID} - apiVersion: machine.openshift.io/v1beta1 - blockDevices: - - ebs: - volumeSize: 120 - volumeType: gp2 - credentialsSecret: - name: aws-cloud-credentials - deviceIndex: 0 - iamInstanceProfile: - id: ${CLUSTER_ID}-worker-profile - instanceType: ${INSTANCE_TYPE} - kind: AWSMachineProviderConfig - placement: - availabilityZone: ${LZ_ZONE_NAME} - region: ${CLUSTER_REGION} - securityGroups: - - filters: - - name: tag:Name - values: - - ${CLUSTER_ID}-worker-sg - subnet: - id: ${SUBNET_ID} - publicIp: true - tags: - - name: kubernetes.io/cluster/${CLUSTER_ID} - value: owned - userDataSecret: - name: worker-user-data -EOF ----- diff --git a/modules/installation-osp-troubleshooting.adoc b/modules/installation-osp-troubleshooting.adoc deleted file mode 100644 index 8b5bcff20bd9..000000000000 --- a/modules/installation-osp-troubleshooting.adoc +++ /dev/null @@ -1,40 +0,0 @@ -// Module included in the following assemblies: -// -// * n/a - -[id="installation-osp-customizing_{context}"] - -= Troubleshooting {product-title} on OpenStack installations - -// Structure as needed in the end. This is very much a WIP. -// A few more troubleshooting and/or known issues blurbs incoming - -Unfortunately, there will always be some cases where {product-title} fails to install properly. In these events, it is helpful to understand the likely failure modes as well as how to troubleshoot the failure. - -This document discusses some troubleshooting options for {rh-openstack}-based -deployments. For general tips on troubleshooting the installation program, see the [Installer Troubleshooting](../troubleshooting.md) guide. 
- -== View instance logs - -{rh-openstack} CLI tools must be installed, then: - ----- -$ openstack console log show ----- - -== Connect to instances via SSH - -Get the IP address of the machine on the private network: -``` -openstack server list | grep master -| 0dcd756b-ad80-42f1-987a-1451b1ae95ba | cluster-wbzrr-master-1 | ACTIVE | cluster-wbzrr-openshift=172.24.0.21 | rhcos | m1.s2.xlarge | -| 3b455e43-729b-4e64-b3bd-1d4da9996f27 | cluster-wbzrr-master-2 | ACTIVE | cluster-wbzrr-openshift=172.24.0.18 | rhcos | m1.s2.xlarge | -| 775898c3-ecc2-41a4-b98b-a4cd5ae56fd0 | cluster-wbzrr-master-0 | ACTIVE | cluster-wbzrr-openshift=172.24.0.12 | rhcos | m1.s2.xlarge | -``` - -And connect to it using the control plane machine currently holding the API as a jumpbox: - -``` -ssh -J core@${floating IP address}<1> core@ -``` -<1> The floating IP address assigned to the control plane machine. diff --git a/modules/installation-requirements-user-infra-ibm-z-kvm.adoc b/modules/installation-requirements-user-infra-ibm-z-kvm.adoc deleted file mode 100644 index f35f57823fea..000000000000 --- a/modules/installation-requirements-user-infra-ibm-z-kvm.adoc +++ /dev/null @@ -1,195 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_ibm_z/installing-ibm-z-kvm.adoc - - -:_mod-docs-content-type: CONCEPT -[id="installation-requirements-user-infra_{context}"] -= Machine requirements for a cluster with user-provisioned infrastructure - -For a cluster that contains user-provisioned infrastructure, you must deploy all -of the required machines. - -One or more KVM host machines based on {op-system-base} 8.6 or later. Each {op-system-base} KVM host machine must have libvirt installed and running. The virtual machines are provisioned under each {op-system-base} KVM host machine. - - -[id="machine-requirements_{context}"] -== Required machines - -The smallest {product-title} clusters require the following hosts: - -.Minimum required hosts -[options="header"] -|=== -|Hosts |Description - -|One temporary bootstrap machine -|The cluster requires the bootstrap machine to deploy the {product-title} cluster -on the three control plane machines. You can remove the bootstrap machine after -you install the cluster. -|Three control plane machines -|The control plane machines run the Kubernetes and {product-title} services that form the control plane. - -|At least two compute machines, which are also known as worker machines. -|The workloads requested by {product-title} users run on the compute machines. - -|=== - -[IMPORTANT] -==== -To improve high availability of your cluster, distribute the control plane machines over different {op-system-base} instances on at least two physical machines. -==== - -The bootstrap, control plane, and compute machines must use {op-system-first} as the operating system. - -See link:https://access.redhat.com/articles/rhel-limits[Red Hat Enterprise Linux technology capabilities and limits]. - -[id="network-connectivity_{context}"] -== Network connectivity requirements - -The {product-title} installer creates the Ignition files, which are necessary for all the {op-system-first} virtual machines. The automated installation of {product-title} is performed by the bootstrap machine. It starts the installation of {product-title} on each node, starts the Kubernetes cluster, and then finishes. During this bootstrap, the virtual machine must have an established network connection either through a Dynamic Host Configuration Protocol (DHCP) server or static IP address. 
- -[id="ibm-z-network-connectivity_{context}"] -== {ibm-z-title} network connectivity requirements - -To install on {ibm-z-name} under {op-system-base} KVM, you need: - -* A {op-system-base} KVM host configured with an OSA or RoCE network adapter. -* Either a {op-system-base} KVM host that is configured to use bridged networking in libvirt or MacVTap to connect the network to the guests. -+ -See link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/configuring_and_managing_virtualization/index#types-of-virtual-machine-network-connections_configuring-virtual-machine-network-connections[Types of virtual machine network connections]. - -[id="host-machine-resource-requirements_{context}"] -== Host machine resource requirements -The {op-system-base} KVM host in your environment must meet the following requirements to host the virtual machines that you plan for the {product-title} environment. See link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/configuring_and_managing_virtualization/index#enabling-virtualization-on-ibm-z_assembly_enabling-virtualization-in-rhel-9[Enabling virtualization on {ibm-z-name}]. - -You can install {product-title} version {product-version} on the following {ibm-name} hardware: - -* {ibm-name} z16 (all models), {ibm-name} z15 (all models), {ibm-name} z14 (all models) -* {ibm-linuxone-name} 4 (all models), {ibm-linuxone-name} III (all models), {ibm-linuxone-name} Emperor II, {ibm-linuxone-name} Rockhopper II - -[id="minimum-ibm-z-system-requirements_{context}"] -== Minimum {ibm-z-title} system environment - - -=== Hardware requirements - -* The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. -* At least one network connection to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster. - -[NOTE] -==== -You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of {ibm-z-name}. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster. -==== - -[IMPORTANT] -==== -Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the {product-title} clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. -==== - - -=== Operating system requirements -* One LPAR running on {op-system-base} 8.6 or later with KVM, which is managed by libvirt - -On your {op-system-base} KVM host, set up: - -* Three guest virtual machines for {product-title} control plane machines -* Two guest virtual machines for {product-title} compute machines -* One guest virtual machine for the temporary {product-title} bootstrap machine - -[id="minimum-resource-requirements_{context}"] -== Minimum resource requirements - -Each cluster virtual machine must meet the following minimum requirements: - -[cols="2,2,2,2,2,2",options="header"] -|=== - -|Virtual Machine -|Operating System -|vCPU ^[1]^ -|Virtual RAM -|Storage -|IOPS - -|Bootstrap -|{op-system} -|4 -|16 GB -|100 GB -|N/A - -|Control plane -|{op-system} -|4 -|16 GB -|100 GB -|N/A - -|Compute -|{op-system} -|2 -|8 GB -|100 GB -|N/A - -|=== -[.small] --- -1. One physical core (IFL) provides two logical cores (threads) when SMT-2 is enabled. The hypervisor can provide two or more vCPUs. 
--- - -[id="preferred-ibm-z-system-requirements_{context}"] -== Preferred {ibm-z-title} system environment - - -=== Hardware requirements - -* Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. -* Two network connections to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster. - - -=== Operating system requirements - -* For high availability, two or three LPARs running on {op-system-base} 8.6 or later with KVM, which are managed by libvirt. - -On your {op-system-base} KVM host, set up: - -* Three guest virtual machines for {product-title} control plane machines, distributed across the {op-system-base} KVM host machines. -* At least six guest virtual machines for {product-title} compute machines, distributed across the {op-system-base} KVM host machines. -* One guest virtual machine for the temporary {product-title} bootstrap machine. -* To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using `cpu_shares`. Do the same for infrastructure nodes, if they exist. See link:https://www.ibm.com/docs/en/linux-on-systems?topic=domain-schedinfo[schedinfo] in {ibm-name} Documentation. - -[id="preferred-resource-requirements_{context}"] -== Preferred resource requirements - -The preferred requirements for each cluster virtual machine are: - -[cols="2,2,2,2,2",options="header"] -|=== - -|Virtual Machine -|Operating System -|vCPU -|Virtual RAM -|Storage - -|Bootstrap -|{op-system} -|4 -|16 GB -|120 GB - -|Control plane -|{op-system} -|8 -|16 GB -|120 GB - -|Compute -|{op-system} -|6 -|8 GB -|120 GB - -|=== diff --git a/modules/installing-gitops-operator-in-web-console.adoc b/modules/installing-gitops-operator-in-web-console.adoc deleted file mode 100644 index ccf482d82ee6..000000000000 --- a/modules/installing-gitops-operator-in-web-console.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module is included in the following assemblies: -// -// * /cicd/gitops/installing-openshift-gitops.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installing-gitops-operator-in-web-console_{context}"] -= Installing {gitops-title} Operator in web console - -.Procedure - -. Open the *Administrator* perspective of the web console and navigate to *Operators* → *OperatorHub* in the menu on the left. - -. Search for `OpenShift GitOps`, click the *{gitops-title}* tile, and then click *Install*. -+ -{gitops-title} will be installed in all namespaces of the cluster. - -After the {gitops-title} Operator is installed, it automatically sets up a ready-to-use Argo CD instance that is available in the `openshift-gitops` namespace, and an Argo CD icon is displayed in the console toolbar. -You can create subsequent Argo CD instances for your applications under your projects. diff --git a/modules/installing-gitops-operator-using-cli.adoc b/modules/installing-gitops-operator-using-cli.adoc deleted file mode 100644 index 5187cd65600d..000000000000 --- a/modules/installing-gitops-operator-using-cli.adoc +++ /dev/null @@ -1,60 +0,0 @@ -// Module is included in the following assemblies: -// -// * /cicd/gitops/installing-openshift-gitops.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installing-gitops-operator-using-cli_{context}"] -= Installing {gitops-title} Operator using CLI - -[role="_abstract"] -You can install {gitops-title} Operator from the OperatorHub using the CLI. - -.Procedure - -. 
Create a Subscription object YAML file to subscribe a namespace to the {gitops-title}, for example, `sub.yaml`: -+ -.Example Subscription -[source,yaml] ----- -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: openshift-gitops-operator - namespace: openshift-operators -spec: - channel: latest <1> - installPlanApproval: Automatic - name: openshift-gitops-operator <2> - source: redhat-operators <3> - sourceNamespace: openshift-marketplace <4> ----- -<1> Specify the channel name from where you want to subscribe the Operator. -<2> Specify the name of the Operator to subscribe to. -<3> Specify the name of the CatalogSource that provides the Operator. -<4> The namespace of the CatalogSource. Use `openshift-marketplace` for the default OperatorHub CatalogSources. -+ -. Apply the `Subscription` to the cluster: -+ -[source,terminal] ----- -$ oc apply -f openshift-gitops-sub.yaml ----- -. After the installation is complete, ensure that all the pods in the `openshift-gitops` namespace are running: -+ -[source,terminal] ----- -$ oc get pods -n openshift-gitops ----- -.Example output -+ -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -cluster-b5798d6f9-zr576 1/1 Running 0 65m -kam-69866d7c48-8nsjv 1/1 Running 0 65m -openshift-gitops-application-controller-0 1/1 Running 0 53m -openshift-gitops-applicationset-controller-6447b8dfdd-5ckgh 1/1 Running 0 65m -openshift-gitops-redis-74bd8d7d96-49bjf 1/1 Running 0 65m -openshift-gitops-repo-server-c999f75d5-l4rsg 1/1 Running 0 65m -openshift-gitops-server-5785f7668b-wj57t 1/1 Running 0 53m ----- \ No newline at end of file diff --git a/modules/machine-configs-and-pools.adoc b/modules/machine-configs-and-pools.adoc deleted file mode 100644 index d627624ec571..000000000000 --- a/modules/machine-configs-and-pools.adoc +++ /dev/null @@ -1,75 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="machine-configs-and-pools_{context}"] -= Machine Configs and Machine Config Pools -Machine Config Pools manage a cluster of nodes and their corresponding -Machine Configs. Machine Configs contain configuration information for a -cluster. 
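To make that relationship concrete, a Machine Config that a pool selects by its role label might look like the following sketch; the name, file path, and file contents are illustrative, and the Ignition version should match what your cluster reports (2.2.0 in the listing below).

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-example
  labels:
    machineconfiguration.openshift.io/role: worker   # matched by the worker Machine Config Pool
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - path: /etc/example.conf
        filesystem: root
        mode: 0644
        contents:
          source: data:,example%20setting%0A          # URL-encoded file contents
----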
- -To list all Machine Config Pools that are known: - ----- -$ oc get machineconfigpools -NAME CONFIG UPDATED UPDATING DEGRADED -master master-1638c1aea398413bb918e76632f20799 False False False -worker worker-2feef4f8288936489a5a832ca8efe953 False False False ----- - -To list all Machine Configs: ----- -$ oc get machineconfig -NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL -00-master 4.0.0-0.150.0.0-dirty 2.2.0 16m -00-master-ssh 4.0.0-0.150.0.0-dirty 16m -00-worker 4.0.0-0.150.0.0-dirty 2.2.0 16m -00-worker-ssh 4.0.0-0.150.0.0-dirty 16m -01-master-kubelet 4.0.0-0.150.0.0-dirty 2.2.0 16m -01-worker-kubelet 4.0.0-0.150.0.0-dirty 2.2.0 16m -master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 2.2.0 16m -worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 2.2.0 16m ----- - -To list all KubeletConfigs: - ----- -$ oc get kubeletconfigs ----- - -To get more detailed information about a KubeletConfig, including the reason for -the current condition: - ----- -$ oc describe kubeletconfig ----- - -For example: - ----- -# oc describe kubeletconfig set-max-pods - -Name: set-max-pods <1> -Namespace: -Labels: -Annotations: -API Version: machineconfiguration.openshift.io/v1 -Kind: KubeletConfig -Metadata: - Creation Timestamp: 2019-02-05T16:27:20Z - Generation: 1 - Resource Version: 19694 - Self Link: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/set-max-pods - UID: e8ee6410-2962-11e9-9bcc-664f163f5f0f -Spec: - Kubelet Config: <2> - Max Pods: 100 - Machine Config Pool Selector: <3> - Match Labels: - Custom - Kubelet: small-pods -Events: ----- - -<1> The name of the KubeletConfig. -<2> The user defined configuration. -<3> The Machine Config Pool selector to apply the KubeletConfig to. \ No newline at end of file diff --git a/modules/machineset-osp-adding-bare-metal.adoc b/modules/machineset-osp-adding-bare-metal.adoc deleted file mode 100644 index bec69cb21cb5..000000000000 --- a/modules/machineset-osp-adding-bare-metal.adoc +++ /dev/null @@ -1,90 +0,0 @@ -:_mod-docs-content-type: PROCEDURE -[id="machineset-osp-adding-bare-metal_{context}"] -= Adding bare-metal compute machines to a {rh-openstack} cluster - -// TODO -// Mothballed -// Reintroduce when feature is available. -You can add bare-metal compute machines to an {product-title} cluster after you deploy it -on {rh-openstack-first}. In this configuration, all machines are attached to an -existing, installer-provisioned network, and traffic between control plane and -compute machines is routed between subnets. - -.Prerequisites - -* The {rh-openstack} link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/bare_metal_provisioning/index[Bare Metal service (Ironic)] is enabled and accessible by using the {rh-openstack} Compute API. - -* Bare metal is available as link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_bare_metal_provisioning_service/assembly_configuring-the-bare-metal-provisioning-service-after-deployment#proc_creating-flavors-for-launching-bare-metal-instances_bare-metal-post-deployment[an {rh-openstack} flavor]. - -* You deployed an {product-title} cluster on installer-provisioned infrastructure. - -* Your {rh-openstack} cloud provider is configured to route traffic between the installer-created VM -subnet and the pre-existing bare metal subnet. - -.Procedure -. Create a file called `baremetalMachineSet.yaml`, and then add the bare metal flavor to it: -+ -FIXME: May require update before publication. 
-.A sample bare metal MachineSet file -[source,yaml] ----- -apiVersion: machine.openshift.io/v1beta1 -kind: MachineSet -metadata: - labels: - machine.openshift.io/cluster-api-cluster: - machine.openshift.io/cluster-api-machine-role: - machine.openshift.io/cluster-api-machine-type: - name: - - namespace: openshift-machine-api -spec: - replicas: - selector: - matchLabels: - machine.openshift.io/cluster-api-cluster: - machine.openshift.io/cluster-api-machineset: - - template: - metadata: - labels: - machine.openshift.io/cluster-api-cluster: - machine.openshift.io/cluster-api-machine-role: - machine.openshift.io/cluster-api-machine-type: - machine.openshift.io/cluster-api-machineset: - - spec: - providerSpec: - value: - apiVersion: openstackproviderconfig.openshift.io/v1alpha1 - cloudName: openstack - cloudsSecret: - name: openstack-cloud-credentials - namespace: openshift-machine-api - flavor: - image: - kind: OpenstackProviderSpec - networks: - - filter: {} - subnets: - - filter: - name: - tags: openshiftClusterID= - securityGroups: - - filter: {} - name: - - serverMetadata: - Name: - - openshiftClusterID: - tags: - - openshiftClusterID= - trunk: true - userDataSecret: - name: -user-data ----- - -. On a command line, to create the MachineSet resource, type: -+ -[source,terminal] ----- -$ oc create -v baremetalMachineSet.yaml ----- - -You can now use bare-metal compute machines in your {product-title} cluster. diff --git a/modules/metering-cluster-capacity-examples.adoc b/modules/metering-cluster-capacity-examples.adoc deleted file mode 100644 index 3bc78d3e5d22..000000000000 --- a/modules/metering-cluster-capacity-examples.adoc +++ /dev/null @@ -1,48 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-usage-examples.adoc - -[id="metering-cluster-capacity-examples_{context}"] -= Measure cluster capacity hourly and daily - -The following report demonstrates how to measure cluster capacity both hourly and daily. The daily report works by aggregating the hourly report's results. - -The following report measures cluster CPU capacity every hour. - -.Hourly CPU capacity by cluster example - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: cluster-cpu-capacity-hourly -spec: - query: "cluster-cpu-capacity" - schedule: - period: "hourly" <1> ----- -<1> You could change this period to `daily` to get a daily report, but with larger data sets it is more efficient to use an hourly report, then aggregate your hourly data into a daily report. - -The following report aggregates the hourly data into a daily report. - -.Daily CPU capacity by cluster example - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: cluster-cpu-capacity-daily <1> -spec: - query: "cluster-cpu-capacity" <2> - inputs: <3> - - name: ClusterCpuCapacityReportName - value: cluster-cpu-capacity-hourly - schedule: - period: "daily" ----- - -<1> To stay organized, remember to change the `name` of your report if you change any of the other values. -<2> You can also measure `cluster-memory-capacity`. Remember to update the query in the associated hourly report as well. -<3> The `inputs` section configures this report to aggregate the hourly report. Specifically, `value: cluster-cpu-capacity-hourly` is the name of the hourly report that gets aggregated. 
diff --git a/modules/metering-cluster-usage-examples.adoc b/modules/metering-cluster-usage-examples.adoc deleted file mode 100644 index ed6188e8ca3b..000000000000 --- a/modules/metering-cluster-usage-examples.adoc +++ /dev/null @@ -1,27 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-usage-examples.adoc - -[id="metering-cluster-usage-examples_{context}"] -= Measure cluster usage with a one-time report - -The following report measures cluster usage from a specific starting date forward. The report only runs once, after you save it and apply it. - -.CPU usage by cluster example - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: cluster-cpu-usage-2020 <1> -spec: - reportingStart: '2020-01-01T00:00:00Z' <2> - reportingEnd: '2020-12-30T23:59:59Z' - query: cluster-cpu-usage <3> - runImmediately: true <4> ----- -<1> To stay organized, remember to change the `name` of your report if you change any of the other values. -<2> Configures the report to start using data from the `reportingStart` timestamp until the `reportingEnd` timestamp. -<3> Adjust your query here. You can also measure cluster usage with the `cluster-memory-usage` query. -<4> Configures the report to run immediately after saving it and applying it. diff --git a/modules/metering-cluster-utilization-examples.adoc b/modules/metering-cluster-utilization-examples.adoc deleted file mode 100644 index 4c1856b5217f..000000000000 --- a/modules/metering-cluster-utilization-examples.adoc +++ /dev/null @@ -1,26 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-usage-examples.adoc - -[id="metering-cluster-utilization-examples_{context}"] -= Measure cluster utilization using cron expressions - -You can also use cron expressions when configuring the period of your reports. The following report measures cluster utilization by looking at CPU utilization from 9am-5pm every weekday. - -.Weekday CPU utilization by cluster example - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: cluster-cpu-utilization-weekdays <1> -spec: - query: "cluster-cpu-utilization" <2> - schedule: - period: "cron" - expression: 0 0 * * 1-5 <3> ----- -<1> To say organized, remember to change the `name` of your report if you change any of the other values. -<2> Adjust your query here. You can also measure cluster utilization with the `cluster-memory-utilization` query. -<3> For cron periods, normal cron expressions are valid. diff --git a/modules/metering-configure-persistentvolumes.adoc b/modules/metering-configure-persistentvolumes.adoc deleted file mode 100644 index 418782ec8b2b..000000000000 --- a/modules/metering-configure-persistentvolumes.adoc +++ /dev/null @@ -1,57 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-hive-metastore.adoc - -[id="metering-configure-persistentvolumes_{context}"] -= Configuring persistent volumes - -By default, Hive requires one persistent volume to operate. - -`hive-metastore-db-data` is the main persistent volume claim (PVC) required by default. This PVC is used by the Hive metastore to store metadata about tables, such as table name, columns, and location. Hive metastore is used by Presto and the Hive server to look up table metadata when processing queries. You remove this requirement by using MySQL or PostgreSQL for the Hive metastore database. 
- -To install, Hive metastore requires that dynamic volume provisioning is enabled in a storage class, a persistent volume of the correct size must be manually pre-created, or you use a pre-existing MySQL or PostgreSQL database. - -[id="metering-configure-persistentvolumes-storage-class-hive_{context}"] -== Configuring the storage class for the Hive metastore -To configure and specify a storage class for the `hive-metastore-db-data` persistent volume claim, specify the storage class in your `MeteringConfig` custom resource. An example `storage` section with the `class` field is included in the `metastore-storage.yaml` file below. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - hive: - spec: - metastore: - storage: - # Default is null, which means using the default storage class if it exists. - # If you wish to use a different storage class, specify it here - # class: "null" <1> - size: "5Gi" ----- -<1> Uncomment this line and replace `null` with the name of the storage class to use. Leaving the value `null` will cause metering to use the default storage class for the cluster. - -[id="metering-configure-persistentvolumes-volume-size-hive_{context}"] -== Configuring the volume size for the Hive metastore - -Use the `metastore-storage.yaml` file below as a template to configure the volume size for the Hive metastore. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - hive: - spec: - metastore: - storage: - # Default is null, which means using the default storage class if it exists. - # If you wish to use a different storage class, specify it here - # class: "null" - size: "5Gi" <1> ----- -<1> Replace the value for `size` with your desired capacity. The example file shows "5Gi". diff --git a/modules/metering-debugging.adoc b/modules/metering-debugging.adoc deleted file mode 100644 index dab3a52a1eb4..000000000000 --- a/modules/metering-debugging.adoc +++ /dev/null @@ -1,228 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-troubleshooting-debugging.adoc - -[id="metering-debugging_{context}"] -= Debugging metering - -Debugging metering is much easier when you interact directly with the various components. The sections below detail how you can connect and query Presto and Hive as well as view the dashboards of the Presto and HDFS components. - -[NOTE] -==== -All of the commands in this section assume you have installed metering through OperatorHub in the `openshift-metering` namespace. -==== - -[id="metering-get-reporting-operator-logs_{context}"] -== Get reporting operator logs -Use the command below to follow the logs of the `reporting-operator`: - -[source,terminal] ----- -$ oc -n openshift-metering logs -f "$(oc -n openshift-metering get pods -l app=reporting-operator -o name | cut -c 5-)" -c reporting-operator ----- - -[id="metering-query-presto-using-presto-cli_{context}"] -== Query Presto using presto-cli -The following command opens an interactive presto-cli session where you can query Presto. This session runs in the same container as Presto and launches an additional Java instance, which can create memory limits for the pod. If this occurs, you should increase the memory request and limits of the Presto pod. - -By default, Presto is configured to communicate using TLS. 
You must use the following command to run Presto queries: - -[source,terminal] ----- -$ oc -n openshift-metering exec -it "$(oc -n openshift-metering get pods -l app=presto,presto=coordinator -o name | cut -d/ -f2)" \ - -- /usr/local/bin/presto-cli --server https://presto:8080 --catalog hive --schema default --user root --keystore-path /opt/presto/tls/keystore.pem ----- - -Once you run this command, a prompt appears where you can run queries. Use the `show tables from metering;` query to view the list of tables: - -[source,terminal] ----- -$ presto:default> show tables from metering; ----- - -.Example output -[source,terminal] ----- - Table - - datasource_your_namespace_cluster_cpu_capacity_raw - datasource_your_namespace_cluster_cpu_usage_raw - datasource_your_namespace_cluster_memory_capacity_raw - datasource_your_namespace_cluster_memory_usage_raw - datasource_your_namespace_node_allocatable_cpu_cores - datasource_your_namespace_node_allocatable_memory_bytes - datasource_your_namespace_node_capacity_cpu_cores - datasource_your_namespace_node_capacity_memory_bytes - datasource_your_namespace_node_cpu_allocatable_raw - datasource_your_namespace_node_cpu_capacity_raw - datasource_your_namespace_node_memory_allocatable_raw - datasource_your_namespace_node_memory_capacity_raw - datasource_your_namespace_persistentvolumeclaim_capacity_bytes - datasource_your_namespace_persistentvolumeclaim_capacity_raw - datasource_your_namespace_persistentvolumeclaim_phase - datasource_your_namespace_persistentvolumeclaim_phase_raw - datasource_your_namespace_persistentvolumeclaim_request_bytes - datasource_your_namespace_persistentvolumeclaim_request_raw - datasource_your_namespace_persistentvolumeclaim_usage_bytes - datasource_your_namespace_persistentvolumeclaim_usage_raw - datasource_your_namespace_persistentvolumeclaim_usage_with_phase_raw - datasource_your_namespace_pod_cpu_request_raw - datasource_your_namespace_pod_cpu_usage_raw - datasource_your_namespace_pod_limit_cpu_cores - datasource_your_namespace_pod_limit_memory_bytes - datasource_your_namespace_pod_memory_request_raw - datasource_your_namespace_pod_memory_usage_raw - datasource_your_namespace_pod_persistentvolumeclaim_request_info - datasource_your_namespace_pod_request_cpu_cores - datasource_your_namespace_pod_request_memory_bytes - datasource_your_namespace_pod_usage_cpu_cores - datasource_your_namespace_pod_usage_memory_bytes -(32 rows) - -Query 20210503_175727_00107_3venm, FINISHED, 1 node -Splits: 19 total, 19 done (100.00%) -0:02 [32 rows, 2.23KB] [19 rows/s, 1.37KB/s] - -presto:default> ----- - -[id="metering-query-hive-using-beeline_{context}"] -== Query Hive using beeline -The following opens an interactive beeline session where you can query Hive. This session runs in the same container as Hive and launches an additional Java instance, which can create memory limits for the pod. If this occurs, you should increase the memory request and limits of the Hive pod. - -[source,terminal] ----- -$ oc -n openshift-metering exec -it $(oc -n openshift-metering get pods -l app=hive,hive=server -o name | cut -d/ -f2) \ - -c hiveserver2 -- beeline -u 'jdbc:hive2://127.0.0.1:10000/default;auth=noSasl' ----- - -Once you run this command, a prompt appears where you can run queries. 
Use the `show tables;` query to view the list of tables: - -[source,terminal] ----- -$ 0: jdbc:hive2://127.0.0.1:10000/default> show tables from metering; ----- - -.Example output -[source,terminal] ----- -+----------------------------------------------------+ -| tab_name | -+----------------------------------------------------+ -| datasource_your_namespace_cluster_cpu_capacity_raw | -| datasource_your_namespace_cluster_cpu_usage_raw | -| datasource_your_namespace_cluster_memory_capacity_raw | -| datasource_your_namespace_cluster_memory_usage_raw | -| datasource_your_namespace_node_allocatable_cpu_cores | -| datasource_your_namespace_node_allocatable_memory_bytes | -| datasource_your_namespace_node_capacity_cpu_cores | -| datasource_your_namespace_node_capacity_memory_bytes | -| datasource_your_namespace_node_cpu_allocatable_raw | -| datasource_your_namespace_node_cpu_capacity_raw | -| datasource_your_namespace_node_memory_allocatable_raw | -| datasource_your_namespace_node_memory_capacity_raw | -| datasource_your_namespace_persistentvolumeclaim_capacity_bytes | -| datasource_your_namespace_persistentvolumeclaim_capacity_raw | -| datasource_your_namespace_persistentvolumeclaim_phase | -| datasource_your_namespace_persistentvolumeclaim_phase_raw | -| datasource_your_namespace_persistentvolumeclaim_request_bytes | -| datasource_your_namespace_persistentvolumeclaim_request_raw | -| datasource_your_namespace_persistentvolumeclaim_usage_bytes | -| datasource_your_namespace_persistentvolumeclaim_usage_raw | -| datasource_your_namespace_persistentvolumeclaim_usage_with_phase_raw | -| datasource_your_namespace_pod_cpu_request_raw | -| datasource_your_namespace_pod_cpu_usage_raw | -| datasource_your_namespace_pod_limit_cpu_cores | -| datasource_your_namespace_pod_limit_memory_bytes | -| datasource_your_namespace_pod_memory_request_raw | -| datasource_your_namespace_pod_memory_usage_raw | -| datasource_your_namespace_pod_persistentvolumeclaim_request_info | -| datasource_your_namespace_pod_request_cpu_cores | -| datasource_your_namespace_pod_request_memory_bytes | -| datasource_your_namespace_pod_usage_cpu_cores | -| datasource_your_namespace_pod_usage_memory_bytes | -+----------------------------------------------------+ -32 rows selected (13.101 seconds) -0: jdbc:hive2://127.0.0.1:10000/default> ----- - -[id="metering-port-forward-hive-web-ui_{context}"] -== Port-forward to the Hive web UI -Run the following command to port-forward to the Hive web UI: - -[source,terminal] ----- -$ oc -n openshift-metering port-forward hive-server-0 10002 ----- - -You can now open http://127.0.0.1:10002 in your browser window to view the Hive web interface. - -[id="metering-port-forward-hdfs_{context}"] -== Port-forward to HDFS -Run the following command to port-forward to the HDFS namenode: - -[source,terminal] ----- -$ oc -n openshift-metering port-forward hdfs-namenode-0 9870 ----- - -You can now open http://127.0.0.1:9870 in your browser window to view the HDFS web interface. - -Run the following command to port-forward to the first HDFS datanode: - -[source,terminal] ----- -$ oc -n openshift-metering port-forward hdfs-datanode-0 9864 <1> ----- -<1> To check other datanodes, replace `hdfs-datanode-0` with the pod you want to view information on. - -[id="metering-ansible-operator_{context}"] -== Metering Ansible Operator -Metering uses the Ansible Operator to watch and reconcile resources in a cluster environment. 
When debugging a failed metering installation, it can be helpful to view the Ansible logs or status of your `MeteringConfig` custom resource. - -[id="metering-accessing-ansible-logs_{context}"] -=== Accessing Ansible logs -In the default installation, the Metering Operator is deployed as a pod. In this case, you can check the logs of the Ansible container within this pod: - -[source,terminal] ----- -$ oc -n openshift-metering logs $(oc -n openshift-metering get pods -l app=metering-operator -o name | cut -d/ -f2) -c ansible ----- - -Alternatively, you can view the logs of the Operator container (replace `-c ansible` with `-c operator`) for condensed output. - -[id="metering-checking-meteringconfig-status_{context}"] -=== Checking the MeteringConfig Status -It can be helpful to view the `.status` field of your `MeteringConfig` custom resource to debug any recent failures. The following command shows status messages with type `Invalid`: - -[source,terminal] ----- -$ oc -n openshift-metering get meteringconfig operator-metering -o=jsonpath='{.status.conditions[?(@.type=="Invalid")].message}' ----- -// $ oc -n openshift-metering get meteringconfig operator-metering -o json | jq '.status' - -[id="metering-checking-meteringconfig-events_{context}"] -=== Checking MeteringConfig Events -Check events that the Metering Operator is generating. This can be helpful during installation or upgrade to debug any resource failures. Sort events by the last timestamp: - -[source,terminal] ----- -$ oc -n openshift-metering get events --field-selector involvedObject.kind=MeteringConfig --sort-by='.lastTimestamp' ----- - -.Example output with latest changes in the MeteringConfig resources -[source,terminal] ----- -LAST SEEN TYPE REASON OBJECT MESSAGE -4m40s Normal Validating meteringconfig/operator-metering Validating the user-provided configuration -4m30s Normal Started meteringconfig/operator-metering Configuring storage for the metering-ansible-operator -4m26s Normal Started meteringconfig/operator-metering Configuring TLS for the metering-ansible-operator -3m58s Normal Started meteringconfig/operator-metering Configuring reporting for the metering-ansible-operator -3m53s Normal Reconciling meteringconfig/operator-metering Reconciling metering resources -3m47s Normal Reconciling meteringconfig/operator-metering Reconciling monitoring resources -3m41s Normal Reconciling meteringconfig/operator-metering Reconciling HDFS resources -3m23s Normal Reconciling meteringconfig/operator-metering Reconciling Hive resources -2m59s Normal Reconciling meteringconfig/operator-metering Reconciling Presto resources -2m35s Normal Reconciling meteringconfig/operator-metering Reconciling reporting-operator resources -2m14s Normal Reconciling meteringconfig/operator-metering Reconciling reporting resources ----- diff --git a/modules/metering-exposing-the-reporting-api.adoc b/modules/metering-exposing-the-reporting-api.adoc deleted file mode 100644 index a40ed2d312fc..000000000000 --- a/modules/metering-exposing-the-reporting-api.adoc +++ /dev/null @@ -1,159 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-reporting-operator.adoc - -[id="metering-exposing-the-reporting-api_{context}"] -= Exposing the reporting API - -On {product-title} the default metering installation automatically exposes a route, making the reporting API available. 
This provides the following features: - -* Automatic DNS -* Automatic TLS based on the cluster CA - -Also, the default installation makes it possible to use the {product-title} service for serving certificates to protect the reporting API with TLS. The {product-title} OAuth proxy is deployed as a sidecar container for the Reporting Operator, which protects the reporting API with authentication. - -[id="metering-openshift-authentication_{context}"] -== Using {product-title} Authentication - -By default, the reporting API is secured with TLS and authentication. This is done by configuring the Reporting Operator to deploy a pod containing both the Reporting Operator's container, and a sidecar container running {product-title} auth-proxy. - -To access the reporting API, the Metering Operator exposes a route. After that route has been installed, you can run the following command to get the route's hostname. - -[source,terminal] ----- -$ METERING_ROUTE_HOSTNAME=$(oc -n openshift-metering get routes metering -o json | jq -r '.status.ingress[].host') ----- - -Next, set up authentication using either a service account token or basic authentication with a username and password. - -[id="metering-authenticate-using-service-account_{context}"] -=== Authenticate using a service account token -With this method, you use the token in the Reporting Operator's service account, and pass that bearer token to the Authorization header in the following command: - -[source,terminal] ----- -$ TOKEN=$(oc -n openshift-metering serviceaccounts get-token reporting-operator) -curl -H "Authorization: Bearer $TOKEN" -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]" ----- - -Be sure to replace the `name=[Report Name]` and `format=[Format]` parameters in the URL above. The `format` parameter can be json, csv, or tabular. - -[id="metering-authenticate-using-username-password_{context}"] -=== Authenticate using a username and password - -Metering supports configuring basic authentication using a username and password combination, which is specified in the contents of an htpasswd file. By default, a secret containing empty htpasswd data is created. You can, however, configure the `reporting-operator.spec.authProxy.htpasswd.data` and `reporting-operator.spec.authProxy.htpasswd.createSecret` keys to use this method. - -Once you have specified the above in your `MeteringConfig` resource, you can run the following command: - -[source,terminal] ----- -$ curl -u testuser:password123 -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]" ----- - -Be sure to replace `testuser:password123` with a valid username and password combination. - -[id="metering-manually-configure-authentication_{context}"] -== Manually Configuring Authentication -To manually configure, or disable OAuth in the Reporting Operator, you must set `spec.tls.enabled: false` in your `MeteringConfig` resource. - -[WARNING] -==== -This also disables all TLS and authentication between the Reporting Operator, Presto, and Hive. You would need to manually configure these resources yourself. -==== - -Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the {product-title} auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting API is not exposed directly, but instead is proxied to via the auth-proxy sidecar container. 
- -* `reporting-operator.spec.authProxy.enabled` -* `reporting-operator.spec.authProxy.cookie.createSecret` -* `reporting-operator.spec.authProxy.cookie.seed` - -You need to set `reporting-operator.spec.authProxy.enabled` and `reporting-operator.spec.authProxy.cookie.createSecret` to `true` and `reporting-operator.spec.authProxy.cookie.seed` to a 32-character random string. - -You can generate a 32-character random string using the following command. - -[source,terminal] ----- -$ openssl rand -base64 32 | head -c32; echo. ----- - -[id="metering-token-authentication_{context}"] -=== Token authentication - -When the following options are set to `true`, authentication using a bearer token is enabled for the reporting REST API. Bearer tokens can come from service accounts or users. - -* `reporting-operator.spec.authProxy.subjectAccessReview.enabled` -* `reporting-operator.spec.authProxy.delegateURLs.enabled` - -When authentication is enabled, the Bearer token used to query the reporting API of the user or service account must be granted access using one of the following roles: - -* report-exporter -* reporting-admin -* reporting-viewer -* metering-admin -* metering-viewer - -The Metering Operator is capable of creating role bindings for you, granting these permissions by specifying a list of subjects in the `spec.permissions` section. For an example, see the following `advanced-auth.yaml` example configuration. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - permissions: - # anyone in the "metering-admins" group can create, update, delete, etc any - # metering.openshift.io resources in the namespace. - # This also grants permissions to get query report results from the reporting REST API. - meteringAdmins: - - kind: Group - name: metering-admins - # Same as above except read only access and for the metering-viewers group. - meteringViewers: - - kind: Group - name: metering-viewers - # the default serviceaccount in the namespace "my-custom-ns" can: - # create, update, delete, etc reports. - # This also gives permissions query the results from the reporting REST API. - reportingAdmins: - - kind: ServiceAccount - name: default - namespace: my-custom-ns - # anyone in the group reporting-readers can get, list, watch reports, and - # query report results from the reporting REST API. - reportingViewers: - - kind: Group - name: reporting-readers - # anyone in the group cluster-admins can query report results - # from the reporting REST API. So can the user bob-from-accounting. - reportExporters: - - kind: Group - name: cluster-admins - - kind: User - name: bob-from-accounting - - reporting-operator: - spec: - authProxy: - # htpasswd.data can contain htpasswd file contents for allowing auth - # using a static list of usernames and their password hashes. - # - # username is 'testuser' password is 'password123' - # generated htpasswdData using: `htpasswd -nb -s testuser password123` - # htpasswd: - # data: | - # testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc= - # - # change REPLACEME to the output of your htpasswd command - htpasswd: - data: | - REPLACEME ----- - -Alternatively, you can use any role which has rules granting `get` permissions to `reports/export`. This means `get` access to the `export` sub-resource of the `Report` resources in the namespace of the Reporting Operator. For example: `admin` and `cluster-admin`. 
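The following is a minimal sketch of such a role. It grants `get` on the `export` sub-resource of `Report` resources to a hypothetical `report-downloader` service account, and it assumes the default `openshift-metering` namespace; the role, binding, and service account names are illustrative only.

[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: report-export
  namespace: openshift-metering
rules:
  # Allows exporting report results through the reporting REST API.
  - apiGroups: ["metering.openshift.io"]
    resources: ["reports/export"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: report-export
  namespace: openshift-metering
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: report-export
subjects:
  - kind: ServiceAccount
    name: report-downloader
    namespace: openshift-metering
----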
- -By default, the Reporting Operator and Metering Operator service accounts both have these permissions, and their tokens can be used for authentication. - -[id="metering-basic-authentication_{context}"] -=== Basic authentication with a username and password -For basic authentication you can supply a username and password in the `reporting-operator.spec.authProxy.htpasswd.data` field. The username and password must be the same format as those found in an htpasswd file. When set, you can use HTTP basic authentication to provide your username and password that has a corresponding entry in the `htpasswdData` contents. diff --git a/modules/metering-install-operator.adoc b/modules/metering-install-operator.adoc deleted file mode 100644 index daeee992cc76..000000000000 --- a/modules/metering-install-operator.adoc +++ /dev/null @@ -1,133 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-install-operator_{context}"] -= Installing the Metering Operator - -You can install metering by deploying the Metering Operator. The Metering Operator creates and manages the components of the metering stack. - -[NOTE] -==== -You cannot create a project starting with `openshift-` using the web console or by using the `oc new-project` command in the CLI. -==== - -[NOTE] -==== -If the Metering Operator is installed using a namespace other than `openshift-metering`, the metering reports are only viewable using the CLI. It is strongly suggested throughout the installation steps to use the `openshift-metering` namespace. -==== - -[id="metering-install-web-console_{context}"] -== Installing metering using the web console -You can use the {product-title} web console to install the Metering Operator. - -.Procedure - -. Create a namespace object YAML file for the Metering Operator with the `oc create -f .yaml` command. You must use the CLI to create the namespace. For example, `metering-namespace.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-metering <1> - annotations: - openshift.io/node-selector: "" <2> - labels: - openshift.io/cluster-monitoring: "true" ----- -<1> It is strongly recommended to deploy metering in the `openshift-metering` namespace. -<2> Include this annotation before configuring specific node selectors for the operand pods. - -. In the {product-title} web console, click *Operators* -> *OperatorHub*. Filter for `metering` to find the Metering Operator. - -. Click the *Metering* card, review the package description, and then click *Install*. -. Select an *Update Channel*, *Installation Mode*, and *Approval Strategy*. -. Click *Install*. - -. Verify that the Metering Operator is installed by switching to the *Operators* -> *Installed Operators* page. The Metering Operator has a *Status* of *Succeeded* when the installation is complete. -+ -[NOTE] -==== -It might take several minutes for the Metering Operator to appear. -==== - -. Click *Metering* on the *Installed Operators* page for Operator *Details*. From the *Details* page you can create different resources related to metering. - -To complete the metering installation, create a `MeteringConfig` resource to configure metering and install the components of the metering stack. - -[id="metering-install-cli_{context}"] -== Installing metering using the CLI - -You can use the {product-title} CLI to install the Metering Operator. - -.Procedure - -. 
Create a `Namespace` object YAML file for the Metering Operator. You must use the CLI to create the namespace. For example, `metering-namespace.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-metering <1> - annotations: - openshift.io/node-selector: "" <2> - labels: - openshift.io/cluster-monitoring: "true" ----- -<1> It is strongly recommended to deploy metering in the `openshift-metering` namespace. -<2> Include this annotation before configuring specific node selectors for the operand pods. - -. Create the `Namespace` object: -+ -[source,terminal] ----- -$ oc create -f .yaml ----- -+ -For example: -+ -[source,terminal] ----- -$ oc create -f openshift-metering.yaml ----- - -. Create the `OperatorGroup` object YAML file. For example, `metering-og`: -+ -[source,yaml] ----- -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: openshift-metering <1> - namespace: openshift-metering <2> -spec: - targetNamespaces: - - openshift-metering ----- -<1> The name is arbitrary. -<2> Specify the `openshift-metering` namespace. - -. Create a `Subscription` object YAML file to subscribe a namespace to the Metering Operator. This object targets the most recently released version in the `redhat-operators` catalog source. For example, `metering-sub.yaml`: -+ -[source,yaml, subs="attributes+"] ----- -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: metering-ocp <1> - namespace: openshift-metering <2> -spec: - channel: "{product-version}" <3> - source: "redhat-operators" <4> - sourceNamespace: "openshift-marketplace" - name: "metering-ocp" - installPlanApproval: "Automatic" <5> ----- -<1> The name is arbitrary. -<2> You must specify the `openshift-metering` namespace. -<3> Specify {product-version} as the channel. -<4> Specify the `redhat-operators` catalog source, which contains the `metering-ocp` package manifests. If your {product-title} is installed on a restricted network, also known as a disconnected cluster, specify the name of the `CatalogSource` object you created when you configured the Operator LifeCycle Manager (OLM). -<5> Specify "Automatic" install plan approval. diff --git a/modules/metering-install-prerequisites.adoc b/modules/metering-install-prerequisites.adoc deleted file mode 100644 index 293f9f55b897..000000000000 --- a/modules/metering-install-prerequisites.adoc +++ /dev/null @@ -1,13 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adoc - -[id="metering-install-prerequisites_{context}"] -= Prerequisites - -Metering requires the following components: - -* A `StorageClass` resource for dynamic volume provisioning. Metering supports a number of different storage solutions. -* 4GB memory and 4 CPU cores available cluster capacity and at least one node with 2 CPU cores and 2GB memory capacity available. -* The minimum resources needed for the largest single pod installed by metering are 2GB of memory and 2 CPU cores. -** Memory and CPU consumption may often be lower, but will spike when running reports, or collecting data for larger clusters. 
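Because the first prerequisite calls for a `StorageClass` resource that supports dynamic volume provisioning, it can be useful to confirm that one exists before you start the installation. This is a minimal check that only assumes the `oc` CLI is logged in to the target cluster:

[source,terminal]
----
$ oc get storageclass
----

A storage class marked `(default)` in the output is the one metering uses when you do not specify a storage class in your `MeteringConfig` resource.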
diff --git a/modules/metering-install-verify.adoc b/modules/metering-install-verify.adoc deleted file mode 100644 index 9b575dfa7962..000000000000 --- a/modules/metering-install-verify.adoc +++ /dev/null @@ -1,95 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adc - -[id="metering-install-verify_{context}"] -= Verifying the metering installation - -You can verify the metering installation by performing any of the following checks: - -* Check the Metering Operator `ClusterServiceVersion` (CSV) resource for the metering version. This can be done through either the web console or CLI. -+ --- -.Procedure (UI) - . Navigate to *Operators* -> *Installed Operators* in the `openshift-metering` namespace. - . Click *Metering Operator*. - . Click *Subscription* for *Subscription Details*. - . Check the *Installed Version*. - -.Procedure (CLI) -* Check the Metering Operator CSV in the `openshift-metering` namespace: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering get csv ----- -+ -.Example output -[source,terminal,subs="attributes+"] ----- -NAME DISPLAY VERSION REPLACES PHASE -elasticsearch-operator.{product-version}.0-202006231303.p0 OpenShift Elasticsearch Operator {product-version}.0-202006231303.p0 Succeeded -metering-operator.v{product-version}.0 Metering {product-version}.0 Succeeded ----- --- - -* Check that all required pods in the `openshift-metering` namespace are created. This can be done through either the web console or CLI. -+ --- -[NOTE] -==== -Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator installation. -==== - -.Procedure (UI) -* Navigate to *Workloads* -> *Pods* in the metering namespace and verify that pods are being created. This can take several minutes after installing the metering stack. - -.Procedure (CLI) -* Check that all required pods in the `openshift-metering` namespace are created: -+ -[source,terminal] ----- -$ oc -n openshift-metering get pods ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -hive-metastore-0 2/2 Running 0 3m28s -hive-server-0 3/3 Running 0 3m28s -metering-operator-68dd64cfb6-2k7d9 2/2 Running 0 5m17s -presto-coordinator-0 2/2 Running 0 3m9s -reporting-operator-5588964bf8-x2tkn 2/2 Running 0 2m40s ----- --- - -* Verify that the `ReportDataSource` resources are beginning to import data, indicated by a valid timestamp in the `EARLIEST METRIC` column. This might take several minutes. 
Filter out the "-raw" `ReportDataSource` resources, which do not import data: -+ -[source,terminal] ----- -$ oc get reportdatasources -n openshift-metering | grep -v raw ----- -+ -.Example output -[source,terminal] ----- -NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE -node-allocatable-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T18:54:45Z 9m50s -node-allocatable-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T18:54:45Z 9m50s -node-capacity-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:39Z 9m50s -node-capacity-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T18:54:44Z 9m50s -persistentvolumeclaim-capacity-bytes 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:43Z 9m50s -persistentvolumeclaim-phase 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:28Z 9m50s -persistentvolumeclaim-request-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:34Z 9m50s -persistentvolumeclaim-usage-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:36Z 9m49s -pod-limit-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:26Z 9m49s -pod-limit-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:30Z 9m49s -pod-persistentvolumeclaim-request-info 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:37Z 9m49s -pod-request-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T18:54:24Z 9m49s -pod-request-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:32Z 9m49s -pod-usage-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T18:54:10Z 9m49s -pod-usage-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:20Z 9m49s ----- - -After all pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster. diff --git a/modules/metering-overview.adoc b/modules/metering-overview.adoc deleted file mode 100644 index abb20ed8c452..000000000000 --- a/modules/metering-overview.adoc +++ /dev/null @@ -1,33 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adoc -// * metering/metering-using-metering.adoc - -[id="metering-overview_{context}"] -= Metering overview - -Metering is a general purpose data analysis tool that enables you to write reports to process data from different data sources. As a cluster administrator, you can use metering to analyze what is happening in your cluster. You can either write your own, or use predefined SQL queries to define how you want to process data from the different data sources you have available. - -Metering focuses primarily on in-cluster metric data using Prometheus as a default data source, enabling users of metering to do reporting on pods, namespaces, and most other Kubernetes resources. 
- -You can install metering on {product-title} 4.x clusters and above. - -[id="metering-resources_{context}"] -== Metering resources - -Metering has many resources which can be used to manage the deployment and installation of metering, as well as the reporting functionality metering provides. - -Metering is managed using the following custom resource definitions (CRDs): - -[cols="1,7"] -|=== - -|*MeteringConfig* |Configures the metering stack for deployment. Contains customizations and configuration options to control each component that makes up the metering stack. - -|*Report* |Controls what query to use, when, and how often the query should be run, and where to store the results. - -|*ReportQuery* |Contains the SQL queries used to perform analysis on the data contained within `ReportDataSource` resources. - -|*ReportDataSource* |Controls the data available to `ReportQuery` and `Report` resources. Allows configuring access to different databases for use within metering. - -|=== diff --git a/modules/metering-prometheus-connection.adoc b/modules/metering-prometheus-connection.adoc deleted file mode 100644 index b713fd4ff17d..000000000000 --- a/modules/metering-prometheus-connection.adoc +++ /dev/null @@ -1,55 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-reporting-operator.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-prometheus-connection_{context}"] -= Securing a Prometheus connection - -When you install metering on {product-title}, Prometheus is available at https://prometheus-k8s.openshift-monitoring.svc:9091/. - -To secure the connection to Prometheus, the default metering installation uses the {product-title} certificate authority (CA). If your Prometheus instance uses a different CA, you can inject the CA through a config map. You can also configure the Reporting Operator to use a specified bearer token to authenticate with Prometheus. - -.Procedure - -* Inject the CA that your Prometheus instance uses through a config map. For example: -+ -[source,yaml] ----- -spec: - reporting-operator: - spec: - config: - prometheus: - certificateAuthority: - useServiceAccountCA: false - configMap: - enabled: true - create: true - name: reporting-operator-certificate-authority-config - filename: "internal-ca.crt" - value: | - -----BEGIN CERTIFICATE----- - (snip) - -----END CERTIFICATE----- ----- -+ -Alternatively, to use the system certificate authorities for publicly valid certificates, set both `useServiceAccountCA` and `configMap.enabled` to `false`. - -* Specify a bearer token to authenticate with Prometheus. For example: - -[source,yaml] ----- -spec: - reporting-operator: - spec: - config: - prometheus: - metricsImporter: - auth: - useServiceAccountToken: false - tokenSecret: - enabled: true - create: true - value: "abc-123" ----- diff --git a/modules/metering-reports.adoc b/modules/metering-reports.adoc deleted file mode 100644 index e9cb4025d9e7..000000000000 --- a/modules/metering-reports.adoc +++ /dev/null @@ -1,381 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-about-reports.adoc -[id="metering-reports_{context}"] -= Reports - -The `Report` custom resource is used to manage the execution and status of reports. Metering produces reports derived from usage data sources, which can be used in further analysis and filtering. A single `Report` resource represents a job that manages a database table and updates it with new information according to a schedule. 
The report exposes the data in that table via the Reporting Operator HTTP API. - -A report with a `spec.schedule` field set is always running and tracks the time periods for which it has collected data. This ensures that if metering is shut down or unavailable for an extended period of time, the report backfills the data, starting where it left off. If the schedule is unset, the report runs once, for the time specified by the `reportingStart` and `reportingEnd` fields. By default, reports wait for `ReportDataSource` resources to have fully imported any data covered in the reporting period. If the report has a schedule, it waits to run until the data in the period currently being processed has finished importing. - -[id="metering-example-report-with-schedule_{context}"] -== Example report with a schedule - -The following example `Report` object contains information on every pod's CPU requests, and runs every hour, adding the last hour's worth of data each time it runs. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - reportingStart: "2021-07-01T00:00:00Z" - schedule: - period: "hourly" - hourly: - minute: 0 - second: 0 ----- - -[id="metering-example-report-without-schedule_{context}"] -== Example report without a schedule (run-once) - -The following example `Report` object contains information on every pod's CPU requests for all of July. After completion, it does not run again. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - reportingStart: "2021-07-01T00:00:00Z" - reportingEnd: "2021-07-31T00:00:00Z" ----- - -[id="metering-query_{context}"] -== query - -The `query` field names the `ReportQuery` resource used to generate the report. The report query controls the schema of the report as well as how the results are processed.
- -*`query` is a required field.* - -Use the following command to list available `ReportQuery` resources: - -[source,terminal] ----- -$ oc -n openshift-metering get reportqueries ----- - -.Example output -[source,terminal] ----- -NAME AGE -cluster-cpu-capacity 23m -cluster-cpu-capacity-raw 23m -cluster-cpu-usage 23m -cluster-cpu-usage-raw 23m -cluster-cpu-utilization 23m -cluster-memory-capacity 23m -cluster-memory-capacity-raw 23m -cluster-memory-usage 23m -cluster-memory-usage-raw 23m -cluster-memory-utilization 23m -cluster-persistentvolumeclaim-request 23m -namespace-cpu-request 23m -namespace-cpu-usage 23m -namespace-cpu-utilization 23m -namespace-memory-request 23m -namespace-memory-usage 23m -namespace-memory-utilization 23m -namespace-persistentvolumeclaim-request 23m -namespace-persistentvolumeclaim-usage 23m -node-cpu-allocatable 23m -node-cpu-allocatable-raw 23m -node-cpu-capacity 23m -node-cpu-capacity-raw 23m -node-cpu-utilization 23m -node-memory-allocatable 23m -node-memory-allocatable-raw 23m -node-memory-capacity 23m -node-memory-capacity-raw 23m -node-memory-utilization 23m -persistentvolumeclaim-capacity 23m -persistentvolumeclaim-capacity-raw 23m -persistentvolumeclaim-phase-raw 23m -persistentvolumeclaim-request 23m -persistentvolumeclaim-request-raw 23m -persistentvolumeclaim-usage 23m -persistentvolumeclaim-usage-raw 23m -persistentvolumeclaim-usage-with-phase-raw 23m -pod-cpu-request 23m -pod-cpu-request-raw 23m -pod-cpu-usage 23m -pod-cpu-usage-raw 23m -pod-memory-request 23m -pod-memory-request-raw 23m -pod-memory-usage 23m -pod-memory-usage-raw 23m ----- - -Report queries with the `-raw` suffix are used by other `ReportQuery` resources to build more complex queries, and should not be used directly for reports. - -`namespace-` prefixed queries aggregate pod CPU and memory requests by namespace, providing a list of namespaces and their overall usage based on resource requests. - -`pod-` prefixed queries are similar to `namespace-` prefixed queries but aggregate information by pod rather than namespace. These queries include the pod's namespace and node. - -`node-` prefixed queries return information about each node's total available resources. - -`aws-` prefixed queries are specific to AWS. Queries suffixed with `-aws` return the same data as queries of the same name without the suffix, and correlate usage with the EC2 billing data. - -The `aws-ec2-billing-data` report is used by other queries, and should not be used as a standalone report. The `aws-ec2-cluster-cost` report provides a total cost based on the nodes included in the cluster, and the sum of their costs for the time period being reported on. - -Use the following command to get the `ReportQuery` resource as YAML, and check the `spec.columns` field. For example, run: - -[source,terminal] ----- -$ oc -n openshift-metering get reportqueries namespace-memory-request -o yaml ----- - -.Example output -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: ReportQuery -metadata: - name: namespace-memory-request - labels: - operator-metering: "true" -spec: - columns: - - name: period_start - type: timestamp - unit: date - - name: period_end - type: timestamp - unit: date - - name: namespace - type: varchar - unit: kubernetes_namespace - - name: pod_request_memory_byte_seconds - type: double - unit: byte_seconds ----- - -[id="metering-schedule_{context}"] -== schedule - -The `spec.schedule` configuration block defines when the report runs. 
The main fields in the `schedule` section are `period`, and then depending on the value of `period`, the fields `hourly`, `daily`, `weekly`, and `monthly` allow you to fine-tune when the report runs. - -For example, if `period` is set to `weekly`, you can add a `weekly` field to the `spec.schedule` block. The following example will run once a week on Wednesday, at 1 PM (hour 13 in the day). - -[source,yaml] ----- -... - schedule: - period: "weekly" - weekly: - dayOfWeek: "wednesday" - hour: 13 -... ----- - -[id="metering-period_{context}"] -=== period - -Valid values of `schedule.period` are listed below, and the options available to set for a given period are also listed. - -* `hourly` -** `minute` -** `second` -* `daily` -** `hour` -** `minute` -** `second` -* `weekly` -** `dayOfWeek` -** `hour` -** `minute` -** `second` -* `monthly` -** `dayOfMonth` -** `hour` -** `minute` -** `second` -* `cron` -** `expression` - -Generally, the `hour`, `minute`, `second` fields control when in the day the report runs, and `dayOfWeek`/`dayOfMonth` control what day of the week, or day of month the report runs on, if it is a weekly or monthly report period. - -For each of these fields, there is a range of valid values: - -* `hour` is an integer value between 0-23. -* `minute` is an integer value between 0-59. -* `second` is an integer value between 0-59. -* `dayOfWeek` is a string value that expects the day of the week (spelled out). -* `dayOfMonth` is an integer value between 1-31. - -For cron periods, normal cron expressions are valid: - -* `expression: "*/5 * * * *"` - -[id="metering-reportingStart_{context}"] -== reportingStart - -To support running a report against existing data, you can set the `spec.reportingStart` field to a link:https://tools.ietf.org/html/rfc3339#section-5.8[RFC3339 timestamp] to tell the report to run according to its `schedule` starting from `reportingStart` rather than the current time. - -[NOTE] -==== -Setting the `spec.reportingStart` field to a specific time will result in the Reporting Operator running many queries in succession for each interval in the schedule that is between the `reportingStart` time and the current time. This could be thousands of queries if the period is less than daily and the `reportingStart` is more than a few months back. If `reportingStart` is left unset, the report will run at the next full `reportingPeriod` after the time the report is created. -==== - -As an example of how to use this field, if you had data already collected dating back to January 1st, 2019 that you want to include in your `Report` object, you can create a report with the following values: - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - schedule: - period: "hourly" - reportingStart: "2021-01-01T00:00:00Z" ----- - -[id="metering-reportingEnd_{context}"] -== reportingEnd - -To configure a report to only run until a specified time, you can set the `spec.reportingEnd` field to an link:https://tools.ietf.org/html/rfc3339#section-5.8[RFC3339 timestamp]. The value of this field will cause the report to stop running on its schedule after it has finished generating reporting data for the period covered from its start time until `reportingEnd`. - -Because a schedule will most likely not align with the `reportingEnd`, the last period in the schedule will be shortened to end at the specified `reportingEnd` time. 
If left unset, then the report will run forever, or until a `reportingEnd` is set on the report. - -For example, if you want to create a report that runs once a week for the month of July: - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - schedule: - period: "weekly" - reportingStart: "2021-07-01T00:00:00Z" - reportingEnd: "2021-07-31T00:00:00Z" ----- - -[id="metering-expiration_{context}"] -== expiration - -Add the `expiration` field to set a retention period on a scheduled metering report. You can avoid manually removing the report by setting the `expiration` duration value. The retention period is equal to the `Report` object `creationDate` plus the `expiration` duration. The report is removed from the cluster at the end of the retention period if no other reports or report queries depend on the expiring report. Deleting the report from the cluster can take several minutes. - -[NOTE] -==== -Setting the `expiration` field is not recommended for roll-up or aggregated reports. If a report is depended upon by other reports or report queries, then the report is not removed at the end of the retention period. You can view the `report-operator` logs at debug level for the timing output around a report retention decision. -==== - -For example, the following scheduled report is deleted 30 minutes after the `creationDate` of the report: - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - schedule: - period: "weekly" - reportingStart: "2021-07-01T00:00:00Z" - expiration: "30m" <1> ----- -<1> Valid time units for the `expiration` duration are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. - -[NOTE] -==== -The `expiration` retention period for a `Report` object is not precise and works on the order of several minutes, not nanoseconds. -==== - -[id="metering-runImmediately_{context}"] -== runImmediately - -When `runImmediately` is set to `true`, the report runs immediately. This behavior ensures that the report is immediately processed and queued without requiring additional scheduling parameters. - -[NOTE] -==== -When `runImmediately` is set to `true`, you must set a `reportingEnd` and `reportingStart` value. -==== - -[id="metering-inputs_{context}"] -== inputs - -The `spec.inputs` field of a `Report` object can be used to override or set values defined in a `ReportQuery` resource's `spec.inputs` field. - -`spec.inputs` is a list of name-value pairs: - -[source,yaml] ----- -spec: - inputs: - - name: "NamespaceCPUUsageReportName" <1> - value: "namespace-cpu-usage-hourly" <2> ----- - -<1> The `name` of an input must exist in the ReportQuery's `inputs` list. -<2> The `value` of the input must be the correct type for the input's `type`. - -// TODO(chance): include modules/metering-reportquery-inputs.adoc module - -[id="metering-roll-up-reports_{context}"] -== Roll-up reports - -Report data is stored in the database much like metrics themselves, and therefore, can be used in aggregated or roll-up reports. A simple use case for a roll-up report is to spread the time required to produce a report over a longer period of time. This is instead of requiring a monthly report to query and add all data over an entire month. For example, the task can be split into daily reports that each run over 1/30 of the data. - -A custom roll-up report requires a custom report query. 
The `ReportQuery` resource template processor provides a `reportTableName` function that can get the necessary table name from a `Report` object's `metadata.name`. - -Below is a snippet taken from a built-in query: - -.pod-cpu.yaml -[source,yaml] ----- -spec: -... - inputs: - - name: ReportingStart - type: time - - name: ReportingEnd - type: time - - name: NamespaceCPUUsageReportName - type: Report - - name: PodCpuUsageRawDataSourceName - type: ReportDataSource - default: pod-cpu-usage-raw -... - - query: | -... - {|- if .Report.Inputs.NamespaceCPUUsageReportName |} - namespace, - sum(pod_usage_cpu_core_seconds) as pod_usage_cpu_core_seconds - FROM {| .Report.Inputs.NamespaceCPUUsageReportName | reportTableName |} -... ----- - -.Example `aggregated-report.yaml` roll-up report -[source,yaml] ----- -spec: - query: "namespace-cpu-usage" - inputs: - - name: "NamespaceCPUUsageReportName" - value: "namespace-cpu-usage-hourly" ----- - -// TODO(chance): replace the comment below with an include on the modules/metering-rollup-report.adoc -// For more information on setting up a roll-up report, see the [roll-up report guide](rollup-reports.md). - -[id="metering-report-status_{context}"] -=== Report status - -The execution of a scheduled report can be tracked using its status field. Any errors occurring during the preparation of a report will be recorded here. - -The `status` field of a `Report` object currently has two fields: - -* `conditions`: Conditions is a list of conditions, each of which have a `type`, `status`, `reason`, and `message` field. Possible values of a condition's `type` field are `Running` and `Failure`, indicating the current state of the scheduled report. The `reason` indicates why its `condition` is in its current state with the `status` being either `true`, `false` or, `unknown`. The `message` provides a human readable indicating why the condition is in the current state. For detailed information on the `reason` values, see link:https://github.com/operator-framework/operator-metering/blob/master/pkg/apis/metering/v1/util/report_util.go#L10[`pkg/apis/metering/v1/util/report_util.go`]. -* `lastReportTime`: Indicates the time metering has collected data up to. diff --git a/modules/metering-store-data-in-azure.adoc b/modules/metering-store-data-in-azure.adoc deleted file mode 100644 index a193836d22ba..000000000000 --- a/modules/metering-store-data-in-azure.adoc +++ /dev/null @@ -1,57 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-azure_{context}"] -= Storing data in Microsoft Azure - -To store data in Azure blob storage, you must use an existing container. - -.Procedure - -. Edit the `spec.storage` section in the `azure-blob-storage.yaml` file: -+ -.Example `azure-blob-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "azure" - azure: - container: "bucket1" <1> - secretName: "my-azure-secret" <2> - rootDirectory: "/testDir" <3> ----- -<1> Specify the container name. -<2> Specify a secret in the metering namespace. See the example `Secret` object below for more details. -<3> Optional: Specify the directory where you would like to store your data. - -. 
Use the following `Secret` object as a template: -+ -.Example Azure `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-azure-secret -data: - azure-storage-account-name: "dGVzdAo=" - azure-secret-access-key: "c2VjcmV0Cg==" ----- - -. Create the secret: -+ -[source,terminal] ----- -$ oc create secret -n openshift-metering generic my-azure-secret \ - --from-literal=azure-storage-account-name=my-storage-account-name \ - --from-literal=azure-secret-access-key=my-secret-key ----- diff --git a/modules/metering-store-data-in-gcp.adoc b/modules/metering-store-data-in-gcp.adoc deleted file mode 100644 index 8a39f891ab18..000000000000 --- a/modules/metering-store-data-in-gcp.adoc +++ /dev/null @@ -1,53 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-gcp_{context}"] -= Storing data in Google Cloud Storage - -To store your data in Google Cloud Storage, you must use an existing bucket. - -.Procedure - -. Edit the `spec.storage` section in the `gcs-storage.yaml` file: -+ -.Example `gcs-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "gcs" - gcs: - bucket: "metering-gcs/test1" <1> - secretName: "my-gcs-secret" <2> ----- -<1> Specify the name of the bucket. You can optionally specify the directory within the bucket where you would like to store your data. -<2> Specify a secret in the metering namespace. See the example `Secret` object below for more details. - -. Use the following `Secret` object as a template: -+ -.Example Google Cloud Storage `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-gcs-secret -data: - gcs-service-account.json: "c2VjcmV0Cg==" ----- - -. Create the secret: -+ -[source,terminal] ----- -$ oc create secret -n openshift-metering generic my-gcs-secret \ - --from-file gcs-service-account.json=/path/to/my/service-account-key.json ----- diff --git a/modules/metering-store-data-in-s3-compatible.adoc b/modules/metering-store-data-in-s3-compatible.adoc deleted file mode 100644 index 1484c0281d36..000000000000 --- a/modules/metering-store-data-in-s3-compatible.adoc +++ /dev/null @@ -1,48 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-s3-compatible_{context}"] -= Storing data in S3-compatible storage - -You can use S3-compatible storage such as Noobaa. - -.Procedure - -. Edit the `spec.storage` section in the `s3-compatible-storage.yaml` file: -+ -.Example `s3-compatible-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "s3Compatible" - s3Compatible: - bucket: "bucketname" <1> - endpoint: "http://example:port-number" <2> - secretName: "my-aws-secret" <3> ----- -<1> Specify the name of your S3-compatible bucket. -<2> Specify the endpoint for your storage. -<3> The name of a secret in the metering namespace containing the AWS credentials in the `data.aws-access-key-id` and `data.aws-secret-access-key` fields. See the example `Secret` object below for more details. - -. 
Use the following `Secret` object as a template: -+ -.Example S3-compatible `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-aws-secret -data: - aws-access-key-id: "dGVzdAo=" - aws-secret-access-key: "c2VjcmV0Cg==" ----- diff --git a/modules/metering-store-data-in-s3.adoc b/modules/metering-store-data-in-s3.adoc deleted file mode 100644 index 41199e170c37..000000000000 --- a/modules/metering-store-data-in-s3.adoc +++ /dev/null @@ -1,136 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-s3_{context}"] -= Storing data in Amazon S3 - -Metering can use an existing Amazon S3 bucket or create a bucket for storage. - -[NOTE] -==== -Metering does not manage or delete any S3 bucket data. You must manually clean up S3 buckets that are used to store metering data. -==== - -.Procedure - -. Edit the `spec.storage` section in the `s3-storage.yaml` file: -+ -.Example `s3-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "s3" - s3: - bucket: "bucketname/path/" <1> - region: "us-west-1" <2> - secretName: "my-aws-secret" <3> - # Set to false if you want to provide an existing bucket, instead of - # having metering create the bucket on your behalf. - createBucket: true <4> ----- -<1> Specify the name of the bucket where you would like to store your data. Optional: Specify the path within the bucket. -<2> Specify the region of your bucket. -<3> The name of a secret in the metering namespace containing the AWS credentials in the `data.aws-access-key-id` and `data.aws-secret-access-key` fields. See the example `Secret` object below for more details. -<4> Set this field to `false` if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have `CreateBucket` permissions. - -. Use the following `Secret` object as a template: -+ -.Example AWS `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-aws-secret -data: - aws-access-key-id: "dGVzdAo=" - aws-secret-access-key: "c2VjcmV0Cg==" ----- -+ -[NOTE] -==== -The values of the `aws-access-key-id` and `aws-secret-access-key` must be base64 encoded. -==== - -. Create the secret: -+ -[source,terminal] ----- -$ oc create secret -n openshift-metering generic my-aws-secret \ - --from-literal=aws-access-key-id=my-access-key \ - --from-literal=aws-secret-access-key=my-secret-key ----- -+ -[NOTE] -==== -This command automatically base64 encodes your `aws-access-key-id` and `aws-secret-access-key` values. -==== - -The `aws-access-key-id` and `aws-secret-access-key` credentials must have read and write access to the bucket. 
The following `aws/read-write.json` file shows an IAM policy that grants the required permissions: - -.Example `aws/read-write.json` file -[source,json] ----- -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "1", - "Effect": "Allow", - "Action": [ - "s3:AbortMultipartUpload", - "s3:DeleteObject", - "s3:GetObject", - "s3:HeadBucket", - "s3:ListBucket", - "s3:ListMultipartUploadParts", - "s3:PutObject" - ], - "Resource": [ - "arn:aws:s3:::operator-metering-data/*", - "arn:aws:s3:::operator-metering-data" - ] - } - ] -} ----- - -If `spec.storage.hive.s3.createBucket` is set to `true` or unset in your `s3-storage.yaml` file, then you should use the `aws/read-write-create.json` file that contains permissions for creating and deleting buckets: - -.Example `aws/read-write-create.json` file -[source,json] ----- -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "1", - "Effect": "Allow", - "Action": [ - "s3:AbortMultipartUpload", - "s3:DeleteObject", - "s3:GetObject", - "s3:HeadBucket", - "s3:ListBucket", - "s3:CreateBucket", - "s3:DeleteBucket", - "s3:ListMultipartUploadParts", - "s3:PutObject" - ], - "Resource": [ - "arn:aws:s3:::operator-metering-data/*", - "arn:aws:s3:::operator-metering-data" - ] - } - ] -} ----- diff --git a/modules/metering-store-data-in-shared-volumes.adoc b/modules/metering-store-data-in-shared-volumes.adoc deleted file mode 100644 index a3a73285a5fe..000000000000 --- a/modules/metering-store-data-in-shared-volumes.adoc +++ /dev/null @@ -1,150 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-shared-volumes_{context}"] -= Storing data in shared volumes - -Metering does not configure storage by default. However, you can use any ReadWriteMany persistent volume (PV) or any storage class that provisions a ReadWriteMany PV for metering storage. - -[NOTE] -==== -NFS is not recommended to use in production. Using an NFS server on RHEL as a storage back end can fail to meet metering requirements and to provide the performance that is needed for the Metering Operator to work appropriately. - -Other NFS implementations on the marketplace might not have these issues, such as a Parallel Network File System (pNFS). pNFS is an NFS implementation with distributed and parallel capability. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against {product-title} core components. -==== - -.Procedure - -. Modify the `shared-storage.yaml` file to use a ReadWriteMany persistent volume for storage: -+ -.Example `shared-storage.yaml` file --- -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "sharedPVC" - sharedPVC: - claimName: "metering-nfs" <1> - # Uncomment the lines below to provision a new PVC using the specified storageClass. <2> - # createPVC: true - # storageClass: "my-nfs-storage-class" - # size: 5Gi ----- - -Select one of the configuration options below: - -<1> Set `storage.hive.sharedPVC.claimName` to the name of an existing ReadWriteMany persistent volume claim (PVC). This configuration is necessary if you do not have dynamic volume provisioning or want to have more control over how the persistent volume is created. 
- -<2> Set `storage.hive.sharedPVC.createPVC` to `true` and set the `storage.hive.sharedPVC.storageClass` to the name of a storage class with ReadWriteMany access mode. This configuration uses dynamic volume provisioning to create a volume automatically. --- - -. Create the following resource objects that are required to deploy an NFS server for metering. Use the `oc create -f .yaml` command to create the object YAML files. - -.. Configure a `PersistentVolume` resource object: -+ -.Example `nfs_persistentvolume.yaml` file -[source,yaml] ----- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: nfs - labels: - role: nfs-server -spec: - capacity: - storage: 5Gi - accessModes: - - ReadWriteMany - storageClassName: nfs-server <1> - nfs: - path: "/" - server: REPLACEME - persistentVolumeReclaimPolicy: Delete ----- -<1> Must exactly match the `[kind: StorageClass].metadata.name` field value. - -.. Configure a `Pod` resource object with the `nfs-server` role: -+ -.Example `nfs_server.yaml` file -[source,yaml] ----- -apiVersion: v1 -kind: Pod -metadata: - name: nfs-server - labels: - role: nfs-server -spec: - containers: - - name: nfs-server - image: <1> - imagePullPolicy: IfNotPresent - ports: - - name: nfs - containerPort: 2049 - securityContext: - privileged: true - volumeMounts: - - mountPath: "/mnt/data" - name: local - volumes: - - name: local - emptyDir: {} ----- -<1> Install your NFS server image. - -.. Configure a `Service` resource object with the `nfs-server` role: -+ -.Example `nfs_service.yaml` file -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: nfs-service - labels: - role: nfs-server -spec: - ports: - - name: 2049-tcp - port: 2049 - protocol: TCP - targetPort: 2049 - selector: - role: nfs-server - sessionAffinity: None - type: ClusterIP ----- - -.. Configure a `StorageClass` resource object: -+ -.Example `nfs_storageclass.yaml` file -[source,yaml] ----- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: nfs-server <1> -provisioner: example.com/nfs -parameters: - archiveOnDelete: "false" -reclaimPolicy: Delete -volumeBindingMode: Immediate ----- -<1> Must exactly match the `[kind: PersistentVolume].spec.storageClassName` field value. - - -[WARNING] -==== -Configuration of your NFS storage, and any relevant resource objects, will vary depending on the NFS server image that you use for metering storage. -==== diff --git a/modules/metering-troubleshooting.adoc b/modules/metering-troubleshooting.adoc deleted file mode 100644 index e0a857ced20f..000000000000 --- a/modules/metering-troubleshooting.adoc +++ /dev/null @@ -1,195 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-troubleshooting-debugging.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-troubleshooting_{context}"] -= Troubleshooting metering - -A common issue with metering is pods failing to start. Pods might fail to start due to lack of resources or if they have a dependency on a resource that does not exist, such as a `StorageClass` or `Secret` resource. - -[id="metering-not-enough-compute-resources_{context}"] -== Not enough compute resources - -A common issue when installing or running metering is a lack of compute resources. As the cluster grows and more reports are created, the Reporting Operator pod requires more memory. If memory usage reaches the pod limit, the cluster considers the pod out of memory (OOM) and terminates it with an `OOMKilled` status. 
Ensure that metering is allocated the minimum resource requirements described in the installation prerequisites. - -[NOTE] -==== -The Metering Operator does not autoscale the Reporting Operator based on the load in the cluster. Therefore, CPU usage for the Reporting Operator pod does not increase as the cluster grows. -==== - -To determine if the issue is with resources or scheduling, follow the troubleshooting instructions included in the Kubernetes document https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting[Managing Compute Resources for Containers]. - -To troubleshoot issues due to a lack of compute resources, check the following within the `openshift-metering` namespace. - -.Prerequisites - -* You are currently in the `openshift-metering` namespace. Change to the `openshift-metering` namespace by running: -+ -[source,terminal] ----- -$ oc project openshift-metering ----- - -.Procedure - -. Check for metering `Report` resources that fail to complete and show the status of `ReportingPeriodUnmetDependencies`: -+ -[source,terminal] ----- -$ oc get reports ----- -+ -.Example output -[source,terminal] ----- -NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE -namespace-cpu-utilization-adhoc-10 namespace-cpu-utilization Finished 2020-10-31T00:00:00Z 2m38s -namespace-cpu-utilization-adhoc-11 namespace-cpu-utilization ReportingPeriodUnmetDependencies 2m23s -namespace-memory-utilization-202010 namespace-memory-utilization ReportingPeriodUnmetDependencies 26s -namespace-memory-utilization-202011 namespace-memory-utilization ReportingPeriodUnmetDependencies 14s ----- - -. Check the `ReportDataSource` resources where the `NEWEST METRIC` is less than the report end date: -+ -[source,terminal] ----- -$ oc get reportdatasource ----- -+ -.Example output -[source,terminal] ----- -NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE -... -node-allocatable-cpu-cores 2020-04-23T09:14:00Z 2020-08-31T10:07:00Z 2020-04-23T09:14:00Z 2020-10-15T17:13:00Z 2020-12-09T12:45:10Z 230d -node-allocatable-memory-bytes 2020-04-23T09:14:00Z 2020-08-30T05:19:00Z 2020-04-23T09:14:00Z 2020-10-14T08:01:00Z 2020-12-09T12:45:12Z 230d -... -pod-usage-memory-bytes 2020-04-23T09:14:00Z 2020-08-24T20:25:00Z 2020-04-23T09:14:00Z 2020-10-09T23:31:00Z 2020-12-09T12:45:12Z 230d ----- - -. Check the health of the `reporting-operator` `Pod` resource for a high number of pod restarts: -+ -[source,terminal] ----- -$ oc get pods -l app=reporting-operator ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -reporting-operator-84f7c9b7b6-fr697 2/2 Running 542 8d <1> ----- -<1> The Reporting Operator pod is restarting at a high rate. - -. Check the `reporting-operator` `Pod` resource for an `OOMKilled` termination: -+ -[source,terminal] ----- -$ oc describe pod/reporting-operator-84f7c9b7b6-fr697 ----- -+ -.Example output -[source,terminal] ----- -Name: reporting-operator-84f7c9b7b6-fr697 -Namespace: openshift-metering -Priority: 0 -Node: ip-10-xx-xx-xx.ap-southeast-1.compute.internal/10.xx.xx.xx -... - Ports: 8080/TCP, 6060/TCP, 8082/TCP - Host Ports: 0/TCP, 0/TCP, 0/TCP - State: Running - Started: Thu, 03 Dec 2020 20:59:45 +1000 - Last State: Terminated - Reason: OOMKilled <1> - Exit Code: 137 - Started: Thu, 03 Dec 2020 20:38:05 +1000 - Finished: Thu, 03 Dec 2020 20:59:43 +1000 ----- -<1> The Reporting Operator pod was terminated due to OOM kill. 
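Alternatively, as a quick check that is not part of the original procedure, you can print only the last termination reason of the Reporting Operator containers. This minimal sketch assumes the pods carry the `app=reporting-operator` label used in the previous step:

[source,terminal]
----
$ oc get pods -l app=reporting-operator \
  -o jsonpath='{.items[*].status.containerStatuses[*].lastState.terminated.reason}'
----

If the output contains `OOMKilled`, consider increasing the memory limit as described in the next section.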
- - -[id="metering-check-and-increase-memory-limits_{context}"] -=== Increasing the reporting-operator pod memory limit - -If you are experiencing an increase in pod restarts and OOM kill events, you can check the current memory limit set for the Reporting Operator pod. Increasing the memory limit allows the Reporting Operator pod to update the report data sources. If necessary, increase the memory limit in your `MeteringConfig` resource by 25% - 50%. - -.Procedure - -. Check the current memory limits of the `reporting-operator` `Pod` resource: -+ -[source,terminal] ----- -$ oc describe pod reporting-operator-67d6f57c56-79mrt ----- -+ -.Example output -[source,terminal] ----- -Name: reporting-operator-67d6f57c56-79mrt -Namespace: openshift-metering -Priority: 0 -... - Ports: 8080/TCP, 6060/TCP, 8082/TCP - Host Ports: 0/TCP, 0/TCP, 0/TCP - State: Running - Started: Tue, 08 Dec 2020 14:26:21 +1000 - Ready: True - Restart Count: 0 - Limits: - cpu: 1 - memory: 500Mi <1> - Requests: - cpu: 500m - memory: 250Mi - Environment: -... ----- -<1> The current memory limit for the Reporting Operator pod. - -. Edit the `MeteringConfig` resource to update the memory limit: -+ -[source,terminal] ----- -$ oc edit meteringconfig/operator-metering ----- -+ -.Example `MeteringConfig` resource -[source,yaml] ----- -kind: MeteringConfig -metadata: - name: operator-metering - namespace: openshift-metering -spec: - reporting-operator: - spec: - resources: <1> - limits: - cpu: 1 - memory: 750Mi - requests: - cpu: 500m - memory: 500Mi -... ----- -<1> Add or increase memory limits within the `resources` field of the `MeteringConfig` resource. -+ -[NOTE] -==== -If there continue to be numerous OOM killed events after memory limits are increased, this might indicate that a different issue is causing the reports to be in a pending state. -==== - -[id="metering-storageclass-not-configured_{context}"] -== StorageClass resource not configured - -Metering requires that a default `StorageClass` resource be configured for dynamic provisioning. - -See the documentation on configuring metering for information on how to check if there are any `StorageClass` resources configured for the cluster, how to set the default, and how to configure metering to use a storage class other than the default. - -[id="metering-secret-not-configured-correctly_{context}"] -== Secret not configured correctly - -A common issue with metering is providing the incorrect secret when configuring your persistent storage. Be sure to review the example configuration files and create you secret according to the guidelines for your storage provider. diff --git a/modules/metering-uninstall-crds.adoc b/modules/metering-uninstall-crds.adoc deleted file mode 100644 index 66bd61ec3ecc..000000000000 --- a/modules/metering-uninstall-crds.adoc +++ /dev/null @@ -1,28 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-uninstall.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-uninstall-crds_{context}"] -= Uninstalling metering custom resource definitions - -The metering custom resource definitions (CRDs) remain in the cluster after the Metering Operator is uninstalled and the `openshift-metering` namespace is deleted. - -[IMPORTANT] -==== -Deleting the metering CRDs disrupts any additional metering installations in other namespaces in your cluster. Ensure that there are no other metering installations before proceeding. 
-==== - -.Prerequisites - -* The `MeteringConfig` custom resource in the `openshift-metering` namespace is deleted. -* The `openshift-metering` namespace is deleted. - -.Procedure - -* Delete the remaining metering CRDs: -+ -[source,terminal] ----- -$ oc get crd -o name | grep "metering.openshift.io" | xargs oc delete ----- diff --git a/modules/metering-uninstall.adoc b/modules/metering-uninstall.adoc deleted file mode 100644 index 4cfedd8bd188..000000000000 --- a/modules/metering-uninstall.adoc +++ /dev/null @@ -1,36 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-uninstall.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-uninstall_{context}"] -= Uninstalling a metering namespace - -Uninstall your metering namespace, for example the `openshift-metering` namespace, by removing the `MeteringConfig` resource and deleting the `openshift-metering` namespace. - -.Prerequisites - -* The Metering Operator is removed from your cluster. - -.Procedure - -. Remove all resources created by the Metering Operator: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering delete meteringconfig --all ----- - -. After the previous step is complete, verify that all pods in the `openshift-metering` namespace are deleted or are reporting a terminating state: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering get pods ----- - -. Delete the `openshift-metering` namespace: -+ -[source,terminal] ----- -$ oc delete namespace openshift-metering ----- diff --git a/modules/metering-use-mysql-or-postgresql-for-hive.adoc b/modules/metering-use-mysql-or-postgresql-for-hive.adoc deleted file mode 100644 index 38ebb49072ec..000000000000 --- a/modules/metering-use-mysql-or-postgresql-for-hive.adoc +++ /dev/null @@ -1,89 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-hive-metastore.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-use-mysql-or-postgresql-for-hive_{context}"] -= Using MySQL or PostgreSQL for the Hive metastore - -The default installation of metering configures Hive to use an embedded Java database called Derby. This is unsuited for larger environments and can be replaced with either a MySQL or PostgreSQL database. Use the following example configuration files if your deployment requires a MySQL or PostgreSQL database for Hive. - -There are three configuration options you can use to control the database that is used by Hive metastore: `url`, `driver`, and `secretName`. - -Create your MySQL or Postgres instance with a user name and password. Then create a secret by using the OpenShift CLI (`oc`) or a YAML file. The `secretName` you create for this secret must map to the `spec.hive.spec.config.db.secretName` field in the `MeteringConfig` object resource. - -.Procedure - -. Create a secret using the OpenShift CLI (`oc`) or by using a YAML file: -+ -* Create a secret by using the following command: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering create secret generic --from-literal=username= --from-literal=password= ----- -+ -* Create a secret by using a YAML file. For example: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: <1> -data: - username: <2> - password: <3> ----- -<1> The name of the secret. -<2> Base64 encoded database user name. -<3> Base64 encoded database password. - -. 
Create a configuration file to use a MySQL or PostgreSQL database for Hive: -+ -* To use a MySQL database for Hive, use the example configuration file below. Metering supports configuring the internal Hive metastore to use the MySQL server versions 5.6, 5.7, and 8.0. -+ --- -[source,yaml] ----- -spec: - hive: - spec: - metastore: - storage: - create: false - config: - db: - url: "jdbc:mysql://mysql.example.com:3306/hive_metastore" <1> - driver: "com.mysql.cj.jdbc.Driver" - secretName: "REPLACEME" <2> ----- -[NOTE] -==== -When configuring Metering to work with older MySQL server versions, such as 5.6 or 5.7, you might need to add the link:https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-usagenotes-known-issues-limitations.html[`enabledTLSProtocols` JDBC URL parameter] when configuring the internal Hive metastore. -==== -<1> To use the TLS v1.2 cipher suite, set `url` to `"jdbc:mysql://:/?enabledTLSProtocols=TLSv1.2"`. -<2> The name of the secret containing the base64-encrypted user name and password database credentials. --- -+ -You can pass additional JDBC parameters using the `spec.hive.config.url`. For more details, see the link:https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-configuration-properties.html[MySQL Connector/J 8.0 documentation]. -+ -* To use a PostgreSQL database for Hive, use the example configuration file below: -+ -[source,yaml] ----- -spec: - hive: - spec: - metastore: - storage: - create: false - config: - db: - url: "jdbc:postgresql://postgresql.example.com:5432/hive_metastore" - driver: "org.postgresql.Driver" - username: "" - password: "" ----- -+ -You can pass additional JDBC parameters using the `spec.hive.config.url`. For more details, see the link:https://jdbc.postgresql.org/documentation/head/connect.html#connection-parameters[PostgreSQL JDBC driver documentation]. diff --git a/modules/metering-viewing-report-results.adoc b/modules/metering-viewing-report-results.adoc deleted file mode 100644 index 7f6910c515e9..000000000000 --- a/modules/metering-viewing-report-results.adoc +++ /dev/null @@ -1,103 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-using-metering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-viewing-report-results_{context}"] -= Viewing report results - -Viewing a report's results involves querying the reporting API route and authenticating to the API using your {product-title} credentials. -Reports can be retrieved as `JSON`, `CSV`, or `Tabular` formats. - -.Prerequisites - -* Metering is installed. -* To access report results, you must either be a cluster administrator, or you need to be granted access using the `report-exporter` role in the `openshift-metering` namespace. - -.Procedure - -. Change to the `openshift-metering` project: -+ -[source,terminal] ----- -$ oc project openshift-metering ----- - -. Query the reporting API for results: - -.. Create a variable for the metering `reporting-api` route then get the route: -+ -[source,terminal] ----- -$ meteringRoute="$(oc get routes metering -o jsonpath='{.spec.host}')" ----- -+ -[source,terminal] ----- -$ echo "$meteringRoute" ----- - -.. Get the token of your current user to be used in the request: -+ -[source,terminal] ----- -$ token="$(oc whoami -t)" ----- - -.. Set `reportName` to the name of the report you created: -+ -[source,terminal] ----- -$ reportName=namespace-cpu-request-2020 ----- - -.. 
Set `reportFormat` to one of `csv`, `json`, or `tabular` to specify the output format of the API response: -+ -[source,terminal] ----- -$ reportFormat=csv ----- - -.. To get the results, use `curl` to make a request to the reporting API for your report: -+ -[source,terminal] ----- -$ curl --insecure -H "Authorization: Bearer ${token}" "https://${meteringRoute}/api/v1/reports/get?name=${reportName}&namespace=openshift-metering&format=$reportFormat" ----- -+ -.Example output with `reportName=namespace-cpu-request-2020` and `reportFormat=csv` -[source,terminal] ----- -period_start,period_end,namespace,pod_request_cpu_core_seconds -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-apiserver,11745.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-apiserver-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-authentication,522.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-authentication-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cloud-credential-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-machine-approver,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-node-tuning-operator,3385.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-samples-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-version,522.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-console,522.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-console-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-controller-manager,7830.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-controller-manager-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-dns,34372.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-dns-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-etcd,23490.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-image-registry,5993.400000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-ingress,5220.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-ingress-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver,12528.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager,8613.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-machine-api,1305.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-machine-config-operator,9637.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-metering,19575.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-monitoring,6256.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-network-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-ovn-kubernetes,94503.000000 -2020-01-01 
00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-service-ca,783.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-service-ca-operator,261.000000 ----- diff --git a/modules/metering-writing-reports.adoc b/modules/metering-writing-reports.adoc deleted file mode 100644 index 4f4538f1046d..000000000000 --- a/modules/metering-writing-reports.adoc +++ /dev/null @@ -1,73 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-using-metering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-writing-reports_{context}"] -= Writing Reports - -Writing a report is the way to process and analyze data using metering. - -To write a report, you must define a `Report` resource in a YAML file, specify the required parameters, and create it in the `openshift-metering` namespace. - -.Prerequisites - -* Metering is installed. - -.Procedure - -. Change to the `openshift-metering` project: -+ -[source,terminal] ----- -$ oc project openshift-metering ----- - -. Create a `Report` resource as a YAML file: -+ -.. Create a YAML file with the following content: -+ -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: namespace-cpu-request-2020 <2> - namespace: openshift-metering -spec: - reportingStart: '2020-01-01T00:00:00Z' - reportingEnd: '2020-12-30T23:59:59Z' - query: namespace-cpu-request <1> - runImmediately: true <3> ----- -<1> The `query` specifies the `ReportQuery` resources used to generate the report. Change this based on what you want to report on. For a list of options, run `oc get reportqueries | grep -v raw`. -<2> Use a descriptive name about what the report does for `metadata.name`. A good name describes the query, and the schedule or period you used. -<3> Set `runImmediately` to `true` for it to run with whatever data is available, or set it to `false` if you want it to wait for `reportingEnd` to pass. - -.. Run the following command to create the `Report` resource: -+ -[source,terminal] ----- -$ oc create -f .yaml ----- -+ -.Example output -[source,terminal] ----- -report.metering.openshift.io/namespace-cpu-request-2020 created ----- -+ - -. 
You can list reports and their `Running` status with the following command: -+ -[source,terminal] ----- -$ oc get reports ----- -+ -.Example output -[source,terminal] ----- -NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE -namespace-cpu-request-2020 namespace-cpu-request Finished 2020-12-30T23:59:59Z 26s ----- diff --git a/modules/minimum-ibm-z-system-requirements.adoc b/modules/minimum-ibm-z-system-requirements.adoc deleted file mode 100644 index a855a5454c84..000000000000 --- a/modules/minimum-ibm-z-system-requirements.adoc +++ /dev/null @@ -1,120 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_ibm_z/installing-ibm-z.adoc -// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc -// * installing/installing_ibm_z/installing-ibm-z-lpar.adoc -// * installing/installing_ibm_z/installing-restricted-networks-ibm-z-lpar.adoc - -ifeval::["{context}" == "installing-ibm-z"] -:ibm-z: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z"] -:ibm-z: -endif::[] -ifeval::["{context}" == "installing-ibm-z-lpar"] -:ibm-z-lpar: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z-lpar"] -:ibm-z-lpar: -endif::[] - -:_mod-docs-content-type: CONCEPT -[id="minimum-ibm-z-system-requirements_{context}"] -= Minimum {ibm-z-title} system environment - -You can install {product-title} version {product-version} on the following {ibm-name} hardware: - -* {ibm-name} z16 (all models), {ibm-name} z15 (all models), {ibm-name} z14 (all models) -* {ibm-linuxone-name} 4 (all models), {ibm-linuxone-name} III (all models), {ibm-linuxone-name} Emperor II, {ibm-linuxone-name} Rockhopper II - -ifdef::ibm-z-lpar[] -[IMPORTANT] -==== -When running {product-title} on {ibm-z-name} without a hypervisor use the Dynamic Partition Manager (DPM) to manage your machine. -// Once blog url is available add: For details see blog... -==== -endif::ibm-z-lpar[] - - -== Hardware requirements - -* The equivalent of six Integrated Facilities for Linux (IFL), which are SMT2 enabled, for each cluster. -* At least one network connection to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster. - -[NOTE] -==== -You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of {ibm-z-name}. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster. -==== - -[IMPORTANT] -==== -Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the {product-title} clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role. 
-==== - - -== Operating system requirements - -ifdef::ibm-z[] -* One instance of z/VM 7.2 or later - -On your z/VM instance, set up: - -* Three guest virtual machines for {product-title} control plane machines -* Two guest virtual machines for {product-title} compute machines -* One guest virtual machine for the temporary {product-title} bootstrap machine -endif::ibm-z[] -ifdef::ibm-z-lpar[] -* Five logical partitions (LPARs) -** Three LPARs for {product-title} control plane machines -** Two LPARs for {product-title} compute machines -* One machine for the temporary {product-title} bootstrap machine -endif::ibm-z-lpar[] - - -== {ibm-z-title} network connectivity requirements - -ifdef::ibm-z[] -To install on {ibm-z-name} under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: - -* A direct-attached OSA or RoCE network adapter -* A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. -endif::ibm-z[] -ifdef::ibm-z-lpar[] -To install on {ibm-z-name} in an LPAR, you need: - -* A direct-attached OSA or RoCE network adapter -* For a preferred setup, use OSA link aggregation. -endif::ibm-z-lpar[] - - -=== Disk storage - -ifdef::ibm-z[] -* FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. -* FCP attached disk storage -endif::ibm-z[] -ifdef::ibm-z-lpar[] -* FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. -* FCP attached disk storage -* NVMe disk storage -endif::ibm-z-lpar[] - - -=== Storage / Main Memory - -* 16 GB for {product-title} control plane machines -* 8 GB for {product-title} compute machines -* 16 GB for the temporary {product-title} bootstrap machine - -ifeval::["{context}" == "installing-ibm-z"] -:!ibm-z: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z"] -:!ibm-z: -endif::[] -ifeval::["{context}" == "installing-ibm-z-lpar"] -:!ibm-z-lpar: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z-lpar"] -:!ibm-z-lpar: -endif::[] \ No newline at end of file diff --git a/modules/mod-docs-ocp-conventions.adoc b/modules/mod-docs-ocp-conventions.adoc deleted file mode 100644 index 663fbc77d674..000000000000 --- a/modules/mod-docs-ocp-conventions.adoc +++ /dev/null @@ -1,154 +0,0 @@ -// Module included in the following assemblies: -// -// * mod_docs_guide/mod-docs-conventions-ocp.adoc - -// Base the file name and the ID on the module title. For example: -// * file name: my-reference-a.adoc -// * ID: [id="my-reference-a"] -// * Title: = My reference A - -[id="mod-docs-ocp-conventions_{context}"] -= Modular docs OpenShift conventions - -These Modular Docs conventions for OpenShift docs build on top of the CCS -modular docs guidelines. - -These guidelines and conventions should be read along with the: - -* General CCS -link:https://redhat-documentation.github.io/modular-docs/[modular docs guidelines]. 
-* link:https://redhat-documentation.github.io/asciidoc-markup-conventions/[AsciiDoc markup conventions] -* link:https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/contributing.adoc[OpenShift Contribution Guide] -* link:https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[OpenShift Documentation Guidelines] - -IMPORTANT: If some convention is duplicated, the convention in this guide -supersedes all others. - -[id="ocp-ccs-conventions_{context}"] -== OpenShift CCS conventions - -* All assemblies must define a context that is unique. -+ -Add this context at the top of the page, just before the first anchor id. -+ -Example: -+ ----- -:context: assembly-gsg ----- - -* All assemblies must include the `_attributes/common-attributes.adoc` file near the -context statement. This file contains the standard attributes for the collection. -+ -`include::_attributes/common-attributes.adoc[leveloffset=+1]` - -* All anchor ids must follow the format: -+ ----- -[id="_{context}"] ----- -+ -Anchor name is _connected_ to the `{context}` using a dash. -+ -Example: -+ ----- -[id="creating-your-first-content_{context}"] ----- - -* All modules anchor ids must have the `{context}` variable. -+ -This is just reiterating the format described in the previous bullet point. - -* A comment section must be present at the top of each module and assembly, as -shown in the link:https://github.com/redhat-documentation/modular-docs/tree/master/modular-docs-manual/files[modular docs templates]. -+ -The modules comment section must list which assemblies this module has been -included in, while the assemblies comment section must include other assemblies -that it itself is included in, if any. -+ -Example comment section in an assembly: -+ ----- -// This assembly is included in the following assemblies: -// -// NONE ----- -+ -Example comment section in a module: -+ ----- -// Module included in the following assemblies: -// -// mod_docs_guide/mod-docs-conventions-ocp.adoc ----- - -* All modules must go in the modules directory which is present in the top level -of the openshift-docs repository. These modules must follow the file naming -conventions specified in the -link:https://redhat-documentation.github.io/modular-docs/[modular docs guidelines]. - -* All assemblies must go in the relevant guide/book. If you cannot find a relevant - guide/book, reach out to a member of the OpenShift CCS team. So guides/books contain assemblies, which - contain modules. - -* modules and images folders are symlinked to the top level folder from each book/guide folder. - -* In your assemblies, when you are linking to the content in other books, you must -use the relative path starting like so: -+ ----- -xref:../architecture/architecture.adoc#architecture[architecture] overview. ----- -+ -[IMPORTANT] -==== -You must not include xrefs in modules or create an xref to a module. You can -only use xrefs to link from one assembly to another. -==== - -* All modules in assemblies must be included using the following format (replace 'ilude' with 'include'): -+ -`ilude::modules/.adoc[]` -+ -_OR_ -+ -`ilude::modules/.adoc[leveloffset=+]` -+ -if it requires a leveloffset. -+ -Example: -+ -`include::modules/creating-your-first-content.adoc[leveloffset=+1]` - -NOTE: There is no `..` at the starting of the path. - -//// -* If your assembly is in a subfolder of a guide/book directory, you must add a -statement to the assembly's metadata to use `relfileprefix`. 
- -+ -This adjusts all the xref links in your modules to start from the root -directory. -+ -At the top of the assembly (in the metadata section), add the following line: -+ ----- -:relfileprefix: ../ ----- -+ -NOTE: There is a space between the second : and the ../. - -+ -The only difference in including a module in the _install_config/index.adoc_ -assembly and _install_config/install/planning.adoc_ assembly is the addition of -the `:relfileprefix: ../` attribute at the top of the -_install_config/install/planning.adoc_ assembly. The actual inclusion of -the module remains the same as described in the previous bullet. - -+ -NOTE: This strategy is in place so that links resolve correctly on both -docs.openshift.com and portal docs. -//// - -* Do not use third-level folders even though AsciiBinder permits it. If you think you need -one, work out a better way to organize your content. diff --git a/modules/multi-architecture-scheduling-overview.adoc deleted file mode 100644 index 4bc4c782604a..000000000000 --- a/modules/multi-architecture-scheduling-overview.adoc +++ /dev/null @@ -1,13 +0,0 @@ -// module included in the following assembly -// -//post_installation_configuration/configuring-multi-arch-compute-machines/multi-architecture-compute-managing.adoc - -:_mod-docs-content-type: CONCEPT -[id="multi-architecture-scheduling-overview_{context}"] -= Scheduling workloads on clusters with multi-architecture compute machines - -Before deploying a workload onto a cluster with compute nodes of different architectures, you must configure your compute node scheduling process so that the pods in your cluster are correctly assigned. - -You can schedule workloads onto multi-architecture nodes for your cluster in several ways. For example, you can use node affinity or a node selector to select the node that you want the pod to be scheduled on. You can also use scheduling mechanisms, such as taints and tolerations, together with node affinity or a node selector to correctly schedule workloads. - - diff --git a/modules/nbde-managing-encryption-keys.adoc deleted file mode 100644 index 25d0849f1a78..000000000000 --- a/modules/nbde-managing-encryption-keys.adoc +++ /dev/null @@ -1,10 +0,0 @@ -// Module included in the following assemblies: -// -// security/nbde-implementation-guide.adoc - -[id="nbde-managing-encryption-keys_{context}"] -= Tang server encryption key management - -The cryptographic mechanism to recreate the encryption key is based on the _blinded key_ stored on the node and the private key of the involved Tang servers. To protect against the possibility of an attacker who has obtained both the Tang server private key and the node’s encrypted disk, periodic rekeying is advisable. - -You must perform the rekeying operation for every node before you can delete the old key from the Tang server. The following sections provide procedures for rekeying and deleting old keys.
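For orientation only, and not as a substitute for the procedures that follow, the next sketch shows how you might inspect the keys that a Tang server currently advertises. It assumes the `tang` package is installed on the server and that `tangd` listens on port `7500`, which is a placeholder value:

[source,terminal]
----
$ tang-show-keys 7500
----

A rekeying operation typically generates a new key pair on the server, for example with `/usr/libexec/tangd-keygen /var/db/tang`, after which the advertised thumbprints change and each node must be rebound before the old keys are removed.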
diff --git a/modules/nw-egress-ips-automatic.adoc b/modules/nw-egress-ips-automatic.adoc deleted file mode 100644 index 38e17a6a92a0..000000000000 --- a/modules/nw-egress-ips-automatic.adoc +++ /dev/null @@ -1,89 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/openshift_sdn/assigning-egress-ips.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-egress-ips-automatic_{context}"] -= Configuring automatically assigned egress IP addresses for a namespace - -In {product-title} you can enable automatic assignment of an egress IP address -for a specific namespace across one or more nodes. - -.Prerequisites - -* You have access to the cluster as a user with the `cluster-admin` role. -* You have installed the OpenShift CLI (`oc`). - -.Procedure - -. Update the `NetNamespace` object with the egress IP address using the -following JSON: -+ -[source,terminal] ----- - $ oc patch netnamespace --type=merge -p \ - '{ - "egressIPs": [ - "" - ] - }' ----- -+ --- -where: - -``:: Specifies the name of the project. -``:: Specifies one or more egress IP addresses for the `egressIPs` array. --- -+ -For example, to assign `project1` to an IP address of 192.168.1.100 and -`project2` to an IP address of 192.168.1.101: -+ -[source,terminal] ----- -$ oc patch netnamespace project1 --type=merge -p \ - '{"egressIPs": ["192.168.1.100"]}' -$ oc patch netnamespace project2 --type=merge -p \ - '{"egressIPs": ["192.168.1.101"]}' ----- -+ -[NOTE] -==== -Because OpenShift SDN manages the `NetNamespace` object, you can make changes only by modifying the existing `NetNamespace` object. Do not create a new `NetNamespace` object. -==== - -. Indicate which nodes can host egress IP addresses by setting the `egressCIDRs` -parameter for each host using the following JSON: -+ -[source,terminal] ----- -$ oc patch hostsubnet --type=merge -p \ - '{ - "egressCIDRs": [ - "", "" - ] - }' ----- -+ --- -where: - -``:: Specifies a node name. -``:: Specifies an IP address range in CIDR format. You can specify more than one address range for the `egressCIDRs` array. --- -+ -For example, to set `node1` and `node2` to host egress IP addresses -in the range 192.168.1.0 to 192.168.1.255: -+ -[source,terminal] ----- -$ oc patch hostsubnet node1 --type=merge -p \ - '{"egressCIDRs": ["192.168.1.0/24"]}' -$ oc patch hostsubnet node2 --type=merge -p \ - '{"egressCIDRs": ["192.168.1.0/24"]}' ----- -+ -{product-title} automatically assigns specific egress IP addresses to -available nodes in a balanced way. In this case, it assigns the egress IP -address 192.168.1.100 to `node1` and the egress IP address 192.168.1.101 to -`node2` or vice versa. diff --git a/modules/nw-egress-ips-static.adoc b/modules/nw-egress-ips-static.adoc deleted file mode 100644 index 7b2dd2863fa5..000000000000 --- a/modules/nw-egress-ips-static.adoc +++ /dev/null @@ -1,86 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -:_mod-docs-content-type: PROCEDURE -[id="nw-egress-ips-static_{context}"] -= Configuring manually assigned egress IP addresses for a namespace - -In {product-title} you can associate one or more egress IP addresses with a namespace. - -.Prerequisites - -* You have access to the cluster as a user with the `cluster-admin` role. -* You have installed the OpenShift CLI (`oc`). - -.Procedure - -. 
Update the `NetNamespace` object by specifying the following JSON -object with the desired IP addresses: -+ -[source,terminal] ----- - $ oc patch netnamespace --type=merge -p \ - '{ - "egressIPs": [ - "" - ] - }' ----- -+ --- -where: - -``:: Specifies the name of the project. -``:: Specifies one or more egress IP addresses for the `egressIPs` array. --- -+ -For example, to assign the `project1` project to the IP addresses `192.168.1.100` and `192.168.1.101`: -+ -[source,terminal] ----- -$ oc patch netnamespace project1 --type=merge \ - -p '{"egressIPs": ["192.168.1.100","192.168.1.101"]}' ----- -+ -To provide high availability, set the `egressIPs` value to two or more IP addresses on different nodes. If multiple egress IP addresses are set, then pods use all egress IP addresses roughly equally. -+ -[NOTE] -==== -Because OpenShift SDN manages the `NetNamespace` object, you can make changes only by modifying the existing `NetNamespace` object. Do not create a new `NetNamespace` object. -==== - -. Manually assign the egress IP address to the node hosts. -+ -If your cluster is installed on public cloud infrastructure, you must confirm that the node has available IP address capacity. -+ -Set the `egressIPs` parameter on the `HostSubnet` object on the node host. Using the following JSON, include as many IP addresses as you want to assign to that node host: -+ -[source,terminal] ----- -$ oc patch hostsubnet --type=merge -p \ - '{ - "egressIPs": [ - "", - "" - ] - }' ----- -+ --- -where: - -``:: Specifies a node name. -``:: Specifies an IP address. You can specify more than one IP address for the `egressIPs` array. --- -+ -For example, to specify that `node1` should have the egress IPs `192.168.1.100`, -`192.168.1.101`, and `192.168.1.102`: -+ -[source,terminal] ----- -$ oc patch hostsubnet node1 --type=merge -p \ - '{"egressIPs": ["192.168.1.100", "192.168.1.101", "192.168.1.102"]}' ----- -+ -In the previous example, all egress traffic for `project1` will be routed to the node hosting the specified egress IP, and then connected through Network Address Translation (NAT) to that IP address. diff --git a/modules/nw-egress-router-configmap.adoc b/modules/nw-egress-router-configmap.adoc deleted file mode 100644 index 34fe4629a990..000000000000 --- a/modules/nw-egress-router-configmap.adoc +++ /dev/null @@ -1,92 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -:_mod-docs-content-type: PROCEDURE -[id="configuring-egress-router-configmap_{context}"] -= Configuring an egress router destination mappings with a config map - -For a large or frequently-changing set of destination mappings, you can use a config map to externally maintain the list. -An advantage of this approach is that permission to edit the config map can be delegated to users without `cluster-admin` privileges. Because the egress router pod requires a privileged container, it is not possible for users without `cluster-admin` privileges to edit the pod definition directly. - -[NOTE] -==== -The egress router pod does not automatically update when the config map changes. -You must restart the egress router pod to get updates. -==== - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* Log in as a user with `cluster-admin` privileges. - -.Procedure - -. 
Create a file containing the mapping data for the egress router pod, as in the following example: -+ ----- -# Egress routes for Project "Test", version 3 - -80 tcp 203.0.113.25 - -8080 tcp 203.0.113.26 80 -8443 tcp 203.0.113.26 443 - -# Fallback -203.0.113.27 ----- -+ -You can put blank lines and comments into this file. - -. Create a `ConfigMap` object from the file: -+ -[source,terminal] ----- -$ oc delete configmap egress-routes --ignore-not-found ----- -+ -[source,terminal] ----- -$ oc create configmap egress-routes \ - --from-file=destination=my-egress-destination.txt ----- -+ -In the previous command, the `egress-routes` value is the name of the `ConfigMap` object to create and `my-egress-destination.txt` is the name of the file that the data is read from. -+ -[TIP] -==== -You can alternatively apply the following YAML to create the config map: - -[source,yaml] ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: egress-routes -data: - destination: | - # Egress routes for Project "Test", version 3 - - 80 tcp 203.0.113.25 - - 8080 tcp 203.0.113.26 80 - 8443 tcp 203.0.113.26 443 - - # Fallback - 203.0.113.27 ----- -==== - -. Create an egress router pod definition and specify the `configMapKeyRef` stanza for the `EGRESS_DESTINATION` field in the environment stanza: -+ -[source,yaml] ----- -... -env: -- name: EGRESS_DESTINATION - valueFrom: - configMapKeyRef: - name: egress-routes - key: destination -... ----- diff --git a/modules/nw-egress-router-dest-var.adoc b/modules/nw-egress-router-dest-var.adoc deleted file mode 100644 index 48bf1bc9b5cf..000000000000 --- a/modules/nw-egress-router-dest-var.adoc +++ /dev/null @@ -1,107 +0,0 @@ -// Module included in the following assemblies: -// -// * // * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -// Every redirection mode supports an expanded environment variable - -// Conditional per flavor of Pod -ifeval::["{context}" == "deploying-egress-router-layer3-redirection"] -:redirect: -endif::[] -ifeval::["{context}" == "deploying-egress-router-http-redirection"] -:http: -endif::[] -ifeval::["{context}" == "deploying-egress-router-dns-redirection"] -:dns: -endif::[] - -[id="nw-egress-router-dest-var_{context}"] -= Egress destination configuration format - -ifdef::redirect[] -When an egress router pod is deployed in redirect mode, you can specify redirection rules by using one or more of the following formats: - -- ` ` - Incoming connections to the given `` should be redirected to the same port on the given ``. `` is either `tcp` or `udp`. -- ` ` - As above, except that the connection is redirected to a different `` on ``. -- `` - If the last line is a single IP address, then any connections on any other port will be redirected to the corresponding port on that IP address. If there is no fallback IP address then connections on other ports are rejected. - -In the example that follows several rules are defined: - -- The first line redirects traffic from local port `80` to port `80` on `203.0.113.25`. -- The second and third lines redirect local ports `8080` and `8443` to remote ports `80` and `443` on `203.0.113.26`. -- The last line matches traffic for any ports not specified in the previous rules. 
- -.Example configuration -[source,text] ----- -80 tcp 203.0.113.25 -8080 tcp 203.0.113.26 80 -8443 tcp 203.0.113.26 443 -203.0.113.27 ----- -endif::redirect[] - -ifdef::http[] -When an egress router pod is deployed in HTTP proxy mode, you can specify redirection rules by using one or more of the following formats. Each line in the configuration specifies one group of connections to allow or deny: - -- An IP address allows connections to that IP address, such as `192.168.1.1`. -- A CIDR range allows connections to that CIDR range, such as `192.168.1.0/24`. -- A hostname allows proxying to that host, such as `www.example.com`. -- A domain name preceded by `+*.+` allows proxying to that domain and all of its subdomains, such as `*.example.com`. -- A `!` followed by any of the previous match expressions denies the connection instead. -- If the last line is `*`, then anything that is not explicitly denied is allowed. Otherwise, anything that is not allowed is denied. - -You can also use `*` to allow connections to all remote destinations. - -.Example configuration -[source,text] ----- -!*.example.com -!192.168.1.0/24 -192.168.2.1 -* ----- -endif::http[] - -ifdef::dns[] -When the router is deployed in DNS proxy mode, you specify a list of port and destination mappings. A destination may be either an IP address or a DNS name. - -An egress router pod supports the following formats for specifying port and destination mappings: - -Port and remote address:: - -You can specify a source port and a destination host by using the two field format: ` `. - -The host can be an IP address or a DNS name. If a DNS name is provided, DNS resolution occurs at runtime. For a given host, the proxy connects to the specified source port on the destination host when connecting to the destination host IP address. - -.Port and remote address pair example -[source,text] ----- -80 172.16.12.11 -100 example.com ----- - -Port, remote address, and remote port:: - -You can specify a source port, a destination host, and a destination port by using the three field format: ` `. - -The three field format behaves identically to the two field version, with the exception that the destination port can be different than the source port. - -.Port, remote address, and remote port example -[source,text] ----- -8080 192.168.60.252 80 -8443 web.example.com 443 ----- -endif::dns[] - -// unload flavors -ifdef::redirect[] -:!redirect: -endif::[] -ifdef::http[] -:!http: -endif::[] -ifdef::dns[] -:!dns: -endif::[] diff --git a/modules/nw-egress-router-dns-mode.adoc b/modules/nw-egress-router-dns-mode.adoc deleted file mode 100644 index 098ca56f3628..000000000000 --- a/modules/nw-egress-router-dns-mode.adoc +++ /dev/null @@ -1,68 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -:_mod-docs-content-type: PROCEDURE -[id="nw-egress-router-dns-mode_{context}"] -= Deploying an egress router pod in DNS proxy mode - -In _DNS proxy mode_, an egress router pod acts as a DNS proxy for TCP-based services from its own IP address to one or more destination IP addresses. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* Log in as a user with `cluster-admin` privileges. - -.Procedure - -. Create an egress router pod. - -. Create a service for the egress router pod: - -.. Create a file named `egress-router-service.yaml` that contains the following YAML. 
Set `spec.ports` to the list of ports that you defined previously for the `EGRESS_DNS_PROXY_DESTINATION` environment variable. -+ -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: egress-dns-svc -spec: - ports: - ... - type: ClusterIP - selector: - name: egress-dns-proxy ----- -+ -For example: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: egress-dns-svc -spec: - ports: - - name: con1 - protocol: TCP - port: 80 - targetPort: 80 - - name: con2 - protocol: TCP - port: 100 - targetPort: 100 - type: ClusterIP - selector: - name: egress-dns-proxy ----- - -.. To create the service, enter the following command: -+ -[source,terminal] ----- -$ oc create -f egress-router-service.yaml ----- -+ -Pods can now connect to this service. The connections are proxied to the corresponding ports on the external server, using the reserved egress IP address. diff --git a/modules/nw-egress-router-http-proxy-mode.adoc b/modules/nw-egress-router-http-proxy-mode.adoc deleted file mode 100644 index dda6db2be6da..000000000000 --- a/modules/nw-egress-router-http-proxy-mode.adoc +++ /dev/null @@ -1,62 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -:_mod-docs-content-type: PROCEDURE -[id="nw-egress-router-http-proxy-mode_{context}"] -= Deploying an egress router pod in HTTP proxy mode - -In _HTTP proxy mode_, an egress router pod runs as an HTTP proxy on port `8080`. This mode only works for clients that are connecting to HTTP-based or HTTPS-based services, but usually requires fewer changes to the client pods to get them to work. Many programs can be told to use an HTTP proxy by setting an environment variable. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* Log in as a user with `cluster-admin` privileges. - -.Procedure - -. Create an egress router pod. - -. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: egress-1 -spec: - ports: - - name: http-proxy - port: 8080 <1> - type: ClusterIP - selector: - name: egress-1 ----- -<1> Ensure the `http` port is set to `8080`. - -. To configure the client pod (not the egress proxy pod) to use the HTTP proxy, set the `http_proxy` or `https_proxy` variables: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Pod -metadata: - name: app-1 - labels: - name: app-1 -spec: - containers: - env: - - name: http_proxy - value: http://egress-1:8080/ <1> - - name: https_proxy - value: http://egress-1:8080/ - ... ----- -<1> The service created in the previous step. -+ -[NOTE] -==== -Using the `http_proxy` and `https_proxy` environment variables is not necessary for all setups. If the above does not create a working setup, then consult the documentation for the tool or software you are running in the pod. -==== diff --git a/modules/nw-egress-router-pod.adoc b/modules/nw-egress-router-pod.adoc deleted file mode 100644 index 0ba1308d15b6..000000000000 --- a/modules/nw-egress-router-pod.adoc +++ /dev/null @@ -1,231 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. 
- -// Conditional per flavor of Pod -ifeval::["{context}" == "deploying-egress-router-layer3-redirection"] -:redirect: -:router-type: redirect -endif::[] -ifeval::["{context}" == "deploying-egress-router-http-redirection"] -:http: -:router-type: HTTP -endif::[] -ifeval::["{context}" == "deploying-egress-router-dns-redirection"] -:dns: -:router-type: DNS -endif::[] - -:egress-router-image-name: openshift4/ose-egress-router -:egress-router-image-url: registry.redhat.io/{egress-router-image-name} - -ifdef::http[] -:egress-http-proxy-image-name: openshift4/ose-egress-http-proxy -:egress-http-proxy-image-url: registry.redhat.io/{egress-http-proxy-image-name} -endif::[] -ifdef::dns[] -:egress-dns-proxy-image-name: openshift4/ose-egress-dns-proxy -:egress-dns-proxy-image-url: registry.redhat.io/{egress-dns-proxy-image-name} -endif::[] -ifdef::redirect[] -:egress-pod-image-name: openshift4/ose-pod -:egress-pod-image-url: registry.redhat.io/{egress-pod-image-name} -endif::[] - -// All the images are different for OKD -ifdef::openshift-origin[] - -:egress-router-image-name: openshift/origin-egress-router -:egress-router-image-url: {egress-router-image-name} - -ifdef::http[] -:egress-http-proxy-image-name: openshift/origin-egress-http-proxy -:egress-http-proxy-image-url: {egress-http-proxy-image-name} -endif::[] -ifdef::dns[] -:egress-dns-proxy-image-name: openshift/origin-egress-dns-proxy -:egress-dns-proxy-image-url: {egress-dns-proxy-image-name} -endif::[] -ifdef::redirect[] -:egress-pod-image-name: openshift/origin-pod -:egress-pod-image-url: {egress-pod-image-name} -endif::[] - -endif::openshift-origin[] - -[id="nw-egress-router-pod_{context}"] -= Egress router pod specification for {router-type} mode - -Define the configuration for an egress router pod in the `Pod` object. The following YAML describes the fields for the configuration of an egress router pod in {router-type} mode: - -// Because redirect needs privileged access to setup `EGRESS_DESTINATION` -// and the other modes do not, this ends up needing its own almost -// identical Pod. It's not possible to use conditionals for an unequal -// number of callouts. - -ifdef::redirect[] -[source,yaml,subs="attributes+"] ----- -apiVersion: v1 -kind: Pod -metadata: - name: egress-1 - labels: - name: egress-1 - annotations: - pod.network.openshift.io/assign-macvlan: "true" <1> -spec: - initContainers: - - name: egress-router - image: {egress-router-image-url} - securityContext: - privileged: true - env: - - name: EGRESS_SOURCE <2> - value: - - name: EGRESS_GATEWAY <3> - value: - - name: EGRESS_DESTINATION <4> - value: - - name: EGRESS_ROUTER_MODE - value: init - containers: - - name: egress-router-wait - image: {egress-pod-image-url} ----- -<1> The annotation tells {product-title} to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the `"true"` value. To have {product-title} create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, `eth1`. -<2> IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the `/24` suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the `EGRESS_GATEWAY` variable and no other hosts on the subnet. 
-<3> Same value as the default gateway used by the node. -<4> External server to direct traffic to. Using this example, connections to the pod are redirected to `203.0.113.25`, with a source IP address of `192.168.12.99`. - -.Example egress router pod specification -[source,yaml,subs="attributes+"] ----- -apiVersion: v1 -kind: Pod -metadata: - name: egress-multi - labels: - name: egress-multi - annotations: - pod.network.openshift.io/assign-macvlan: "true" -spec: - initContainers: - - name: egress-router - image: {egress-router-image-url} - securityContext: - privileged: true - env: - - name: EGRESS_SOURCE - value: 192.168.12.99/24 - - name: EGRESS_GATEWAY - value: 192.168.12.1 - - name: EGRESS_DESTINATION - value: | - 80 tcp 203.0.113.25 - 8080 tcp 203.0.113.26 80 - 8443 tcp 203.0.113.26 443 - 203.0.113.27 - - name: EGRESS_ROUTER_MODE - value: init - containers: - - name: egress-router-wait - image: {egress-pod-image-url} ----- -endif::redirect[] - -// Many conditionals because DNS offers one additional env variable. - -ifdef::dns,http[] -[source,yaml,subs="attributes+"] ----- -apiVersion: v1 -kind: Pod -metadata: - name: egress-1 - labels: - name: egress-1 - annotations: - pod.network.openshift.io/assign-macvlan: "true" <1> -spec: - initContainers: - - name: egress-router - image: {egress-router-image-url} - securityContext: - privileged: true - env: - - name: EGRESS_SOURCE <2> - value: - - name: EGRESS_GATEWAY <3> - value: - - name: EGRESS_ROUTER_MODE -ifdef::dns[] - value: dns-proxy -endif::dns[] -ifdef::http[] - value: http-proxy -endif::http[] - containers: - - name: egress-router-pod -ifdef::dns[] - image: {egress-dns-proxy-image-url} - securityContext: - privileged: true -endif::dns[] -ifdef::http[] - image: {egress-http-proxy-image-url} -endif::http[] - env: -ifdef::http[] - - name: EGRESS_HTTP_PROXY_DESTINATION <4> - value: |- - ... -endif::http[] -ifdef::dns[] - - name: EGRESS_DNS_PROXY_DESTINATION <4> - value: |- - ... - - name: EGRESS_DNS_PROXY_DEBUG <5> - value: "1" -endif::dns[] - ... ----- -<1> The annotation tells {product-title} to create a macvlan network interface on the primary network interface controller (NIC) and move that macvlan interface into the pod's network namespace. You must include the quotation marks around the `"true"` value. To have {product-title} create the macvlan interface on a different NIC interface, set the annotation value to the name of that interface. For example, `eth1`. -<2> IP address from the physical network that the node is on that is reserved for use by the egress router pod. Optional: You can include the subnet length, the `/24` suffix, so that a proper route to the local subnet is set. If you do not specify a subnet length, then the egress router can access only the host specified with the `EGRESS_GATEWAY` variable and no other hosts on the subnet. -<3> Same value as the default gateway used by the node. -ifdef::http[] -<4> A string or YAML multi-line string specifying how to configure the proxy. Note that this is specified as an environment variable in the HTTP proxy container, not with the other environment variables in the init container. -endif::http[] -ifdef::dns[] -<4> Specify a list of one or more proxy destinations. -<5> Optional: Specify to output the DNS proxy log output to `stdout`. 
-endif::dns[] -endif::[] - -// unload flavors -ifdef::redirect[] -:!redirect: -endif::[] -ifdef::http[] -:!http: -endif::[] -ifdef::dns[] -:!dns: -endif::[] -ifdef::router-type[] -:!router-type: -endif::[] - -// unload images -ifdef::egress-router-image-name[] -:!egress-router-image-name: -endif::[] -ifdef::egress-router-image-url[] -:!egress-router-image-url: -endif::[] -ifdef::egress-pod-image-name[] -:!egress-pod-image-name: -endif::[] -ifdef::egress-pod-image-url[] -:!egress-pod-image-url: -endif::[] diff --git a/modules/nw-egress-router-redirect-mode.adoc b/modules/nw-egress-router-redirect-mode.adoc deleted file mode 100644 index 7ee506882f99..000000000000 --- a/modules/nw-egress-router-redirect-mode.adoc +++ /dev/null @@ -1,46 +0,0 @@ -// Module included in the following assemblies: -// -// * This module is unused from 4.17+ with removal of SDN. Nwt, team is leaving it incase RFE is made for OVN-K updates on this. Currently, we use CRD instead of manual configuring. - -:_mod-docs-content-type: PROCEDURE -[id="nw-egress-router-redirect-mode_{context}"] -= Deploying an egress router pod in redirect mode - -In _redirect mode_, an egress router pod sets up iptables rules to redirect traffic from its own IP address to one or more destination IP addresses. Client pods that need to use the reserved source IP address must be configured to access the service for the egress router rather than connecting directly to the destination IP. You can access the destination service and port from the application pod by using the `curl` command. For example: - -[source,terminal] ----- -$ curl ----- - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* Log in as a user with `cluster-admin` privileges. - -.Procedure - -. Create an egress router pod. - -. To ensure that other pods can find the IP address of the egress router pod, create a service to point to the egress router pod, as in the following example: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: egress-1 -spec: - ports: - - name: http - port: 80 - - name: https - port: 443 - type: ClusterIP - selector: - name: egress-1 ----- -+ -Your pods can now connect to this service. Their connections are redirected to -the corresponding ports on the external server, using the reserved egress IP -address. diff --git a/modules/nw-ingress-integrating-route-secret-certificate.adoc b/modules/nw-ingress-integrating-route-secret-certificate.adoc deleted file mode 100644 index 4ac39bf62fa0..000000000000 --- a/modules/nw-ingress-integrating-route-secret-certificate.adoc +++ /dev/null @@ -1,6 +0,0 @@ -// -// * ingress/routes.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-ingress-integrating-route-secret-certificate_{context}"] -= Securing route with external certificates in TLS secrets diff --git a/modules/nw-multinetwork-sriov.adoc b/modules/nw-multinetwork-sriov.adoc deleted file mode 100644 index f0e45cb242ba..000000000000 --- a/modules/nw-multinetwork-sriov.adoc +++ /dev/null @@ -1,314 +0,0 @@ -// Module name: nw_multinetwork-sriov.adoc -// Module included in the following assemblies: -// -// * networking/managing_multinetworking.adoc - -:image-prefix: ose - -ifdef::openshift-origin[] -:image-prefix: origin -endif::openshift-origin[] - -[id="nw-multinetwork-sriov_{context}"] -= Configuring SR-IOV - -{product-title} includes the capability to use SR-IOV hardware on -{product-title} nodes, which enables you to attach SR-IOV virtual function (VF) -interfaces to Pods in addition to other network interfaces. 
- -Two components are required to provide this capability: the SR-IOV network -device plug-in and the SR-IOV CNI plug-in. - -* The SR-IOV network device plug-in is a Kubernetes device plug-in for -discovering, advertising, and allocating SR-IOV network virtual function (VF) -resources. Device plug-ins are used in Kubernetes to enable the use of limited -resources, typically in physical devices. Device plug-ins give the Kubernetes -scheduler awareness of which resources are exhausted, allowing Pods to be -scheduled to worker nodes that have sufficient resources available. - -* The SR-IOV CNI plug-in plumbs VF interfaces allocated from the SR-IOV device -plug-in directly into a Pod. - -== Supported Devices - -The following Network Interface Card (NIC) models are supported in -{product-title}: - -* Intel XXV710-DA2 25G card with vendor ID 0x8086 and device ID 0x158b -* Mellanox MT27710 Family [ConnectX-4 Lx] 25G card with vendor ID 0x15b3 -and device ID 0x1015 -* Mellanox MT27800 Family [ConnectX-5] 100G card with vendor ID 0x15b3 -and device ID 0x1017 - -[NOTE] -==== -For Mellanox cards, ensure that SR-IOV is enabled in the firmware before -provisioning VFs on the host. -==== - -== Creating SR-IOV plug-ins and daemonsets - -[NOTE] -==== -The creation of SR-IOV VFs is not handled by the SR-IOV device plug-in and -SR-IOV CNI. -To provision SR-IOV VF on hosts, you must configure it manually. -==== - -To use the SR-IOV network device plug-in and SR-IOV CNI plug-in, run both -plug-ins in daemon mode on each node in your cluster. - -. Create a YAML file for the `openshift-sriov` namespace with the following -contents: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-sriov - labels: - name: openshift-sriov - openshift.io/run-level: "0" - annotations: - openshift.io/node-selector: "" - openshift.io/description: "Openshift SR-IOV network components" ----- - -. Run the following command to create the `openshift-sriov` namespace: -+ ----- -$ oc create -f openshift-sriov.yaml ----- - -. Create a YAML file for the `sriov-device-plugin` service account with the -following contents: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: sriov-device-plugin - namespace: openshift-sriov ----- - -. Run the following command to create the `sriov-device-plugin` service account: -+ ----- -$ oc create -f sriov-device-plugin.yaml ----- - -. Create a YAML file for the `sriov-cni` service account with the following -contents: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: sriov-cni - namespace: openshift-sriov ----- - -. Run the following command to create the `sriov-cni` service account: -+ ----- -$ oc create -f sriov-cni.yaml ----- - -. Create a YAML file for the `sriov-device-plugin` DaemonSet with the following -contents: -+ -[NOTE] -==== -The SR-IOV network device plug-in daemon, when launched, will discover all the -configured SR-IOV VFs (of supported NIC models) on each node and advertise -discovered resources. The number of available SR-IOV VF resources that are -capable of being allocated can be reviewed by describing a node with the -[command]`oc describe node ` command. The resource name for the -SR-IOV VF resources is `openshift.io/sriov`. When no SR-IOV VFs are available on -the node, a value of zero is displayed. 
-==== -+ -[source,yaml,subs="attributes"] ----- -kind: DaemonSet -apiVersion: apps/v1 -metadata: - name: sriov-device-plugin - namespace: openshift-sriov - annotations: - kubernetes.io/description: | - This daemon set launches the SR-IOV network device plugin on each node. -spec: - selector: - matchLabels: - app: sriov-device-plugin - updateStrategy: - type: RollingUpdate - template: - metadata: - labels: - app: sriov-device-plugin - component: network - type: infra - openshift.io/component: network - spec: - hostNetwork: true - nodeSelector: - beta.kubernetes.io/os: linux - tolerations: - - operator: Exists - serviceAccountName: sriov-device-plugin - containers: - - name: sriov-device-plugin - image: quay.io/openshift/{image-prefix}-sriov-network-device-plugin:v4.0.0 - args: - - --log-level=10 - securityContext: - privileged: true - volumeMounts: - - name: devicesock - mountPath: /var/lib/kubelet/ - readOnly: false - - name: net - mountPath: /sys/class/net - readOnly: true - volumes: - - name: devicesock - hostPath: - path: /var/lib/kubelet/ - - name: net - hostPath: - path: /sys/class/net ----- - -. Run the following command to create the `sriov-device-plugin` DaemonSet: -+ ----- -oc create -f sriov-device-plugin.yaml ----- - -. Create a YAML file for the `sriov-cni` DaemonSet with the following contents: -+ -[source,yaml,subs="attributes"] ----- -kind: DaemonSet -apiVersion: apps/v1 -metadata: - name: sriov-cni - namespace: openshift-sriov - annotations: - kubernetes.io/description: | - This daemon set launches the SR-IOV CNI plugin on SR-IOV capable worker nodes. -spec: - selector: - matchLabels: - app: sriov-cni - updateStrategy: - type: RollingUpdate - template: - metadata: - labels: - app: sriov-cni - component: network - type: infra - openshift.io/component: network - spec: - nodeSelector: - beta.kubernetes.io/os: linux - tolerations: - - operator: Exists - serviceAccountName: sriov-cni - containers: - - name: sriov-cni - image: quay.io/openshift/{image-prefix}-sriov-cni:v4.0.0 - securityContext: - privileged: true - volumeMounts: - - name: cnibin - mountPath: /host/opt/cni/bin - volumes: - - name: cnibin - hostPath: - path: /var/lib/cni/bin ----- - -. Run the following command to create the `sriov-cni` DaemonSet: -+ ----- -$ oc create -f sriov-cni.yaml ----- - -== Configuring additional interfaces using SR-IOV - -. Create a YAML file for the Custom Resource (CR) with SR-IOV configuration. The -`name` field in the following CR has the value `sriov-conf`. -+ -[source,yaml] ----- -apiVersion: "k8s.cni.cncf.io/v1" -kind: NetworkAttachmentDefinition -metadata: - name: sriov-conf - annotations: - k8s.v1.cni.cncf.io/resourceName: openshift.io/sriov <1> -spec: - config: '{ - "type": "sriov", <2> - "name": "sriov-conf", - "ipam": { - "type": "host-local", - "subnet": "10.56.217.0/24", - "routes": [{ - "dst": "0.0.0.0/0" - }], - "gateway": "10.56.217.1" - } - }' ----- -+ -<1> `k8s.v1.cni.cncf.io/resourceName` annotation is set to `openshift.io/sriov`. -<2> `type` is set to `sriov`. - -. Run the following command to create the `sriov-conf` CR: -+ ----- -$ oc create -f sriov-conf.yaml ----- - -. 
Create a YAML file for a Pod which references the name of the -`NetworkAttachmentDefinition` and requests one `openshift.io/sriov` resource: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Pod -metadata: - name: sriovsamplepod - annotations: - k8s.v1.cni.cncf.io/networks: sriov-conf -spec: - containers: - - name: sriovsamplepod - command: ["/bin/bash", "-c", "sleep 2000000000000"] - image: centos/tools - resources: - requests: - openshift.io/sriov: '1' - limits: - openshift.io/sriov: '1' ----- - -. Run the following command to create the `sriovsamplepod` Pod: -+ ----- -$ oc create -f sriovsamplepod.yaml ----- - -. View the additional interface by executing the `ip` command: -+ ----- -$ oc exec sriovsamplepod -- ip a ----- diff --git a/modules/nw-multitenant-global.adoc b/modules/nw-multitenant-global.adoc deleted file mode 100644 index 5a1f86f7fd2c..000000000000 --- a/modules/nw-multitenant-global.adoc +++ /dev/null @@ -1,26 +0,0 @@ -// Module included in the following assemblies: -// * networking/multitenant-isolation.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-multitenant-global_{context}"] -= Disabling network isolation for a project - -You can disable network isolation for a project. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* You must log in to the cluster with a user that has the `cluster-admin` role. - -.Procedure - -* Run the following command for the project: -+ -[source,terminal] ----- -$ oc adm pod-network make-projects-global <project1> <project2> ----- -+ -Alternatively, instead of specifying specific project names, you can use the -`--selector=<project_selector>` option to specify projects based upon an -associated label. diff --git a/modules/nw-multitenant-isolation.adoc b/modules/nw-multitenant-isolation.adoc deleted file mode 100644 index c7bb9217f718..000000000000 --- a/modules/nw-multitenant-isolation.adoc +++ /dev/null @@ -1,27 +0,0 @@ -// Module included in the following assemblies: -// * networking/multitenant-isolation.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-multitenant-isolation_{context}"] -= Isolating a project - -You can isolate a project so that pods and services in other projects cannot -access its pods and services. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* You must log in to the cluster with a user that has the `cluster-admin` role. - -.Procedure - -* To isolate the projects in the cluster, run the following command: -+ -[source,terminal] ----- -$ oc adm pod-network isolate-projects <project1> <project2> ----- -+ -Alternatively, instead of specifying specific project names, you can use the -`--selector=<project_selector>` option to specify projects based upon an -associated label. diff --git a/modules/nw-multitenant-joining.adoc b/modules/nw-multitenant-joining.adoc deleted file mode 100644 index 47a883dbb50b..000000000000 --- a/modules/nw-multitenant-joining.adoc +++ /dev/null @@ -1,37 +0,0 @@ -// Module included in the following assemblies: -// * networking/multitenant-isolation.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-multitenant-joining_{context}"] -= Joining projects - -You can join two or more projects to allow network traffic between pods and -services in different projects. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* You must log in to the cluster with a user that has the `cluster-admin` role. - -.Procedure - -. 
Use the following command to join projects to an existing project network: -+ -[source,terminal] ----- -$ oc adm pod-network join-projects --to=<project1> <project2> ----- -+ -Alternatively, instead of specifying specific project names, you can use the -`--selector=<project_selector>` option to specify projects based upon an -associated label. - -. Optional: Run the following command to view the pod networks that you have -joined together: -+ -[source,terminal] ----- -$ oc get netnamespaces ----- -+ -Projects in the same pod-network have the same network ID in the *NETID* column. diff --git a/modules/nw-ne-changes-externalip-ovn.adoc b/modules/nw-ne-changes-externalip-ovn.adoc deleted file mode 100644 index 715781025411..000000000000 --- a/modules/nw-ne-changes-externalip-ovn.adoc +++ /dev/null @@ -1,20 +0,0 @@ -// Module included in the following assemblies: -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: REFERENCE -[id="nw-ne-changes-externalip-ovn_{context}"] -= Understanding changes in external IP behavior with OVN-Kubernetes - -When migrating from OpenShift SDN to OVN-Kubernetes (OVN-K), services that use external IPs might become inaccessible across namespaces due to `NetworkPolicy` enforcement. - -In OpenShift SDN, external IPs were accessible across namespaces by default. However, in OVN-K, network policies strictly enforce multitenant isolation, preventing access to services exposed via external IPs from other namespaces. - -To ensure accessibility, consider the following alternatives: - -* Use an ingress or route: Instead of exposing services by using external IPs, configure an ingress or route to allow external access while maintaining security controls. - -* Adjust `NetworkPolicies`: Modify `NetworkPolicy` rules to explicitly allow access from required namespaces and ensure that traffic is allowed to the designated service ports. Without allowing traffic to the required ports, access might still be blocked, even if the namespace is explicitly allowed. - -* Use a `LoadBalancer` service: If applicable, deploy a `LoadBalancer` service instead of relying on external IPs. - -For more information on configuring NetworkPolicies, see "Configuring NetworkPolicies". diff --git a/modules/nw-ne-comparing-ingress-route.adoc b/modules/nw-ne-comparing-ingress-route.adoc deleted file mode 100644 index cc4def642704..000000000000 --- a/modules/nw-ne-comparing-ingress-route.adoc +++ /dev/null @@ -1,11 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -[id="nw-ne-comparing-ingress-route_{context}"] -= Comparing routes and Ingress -The Kubernetes Ingress resource in {product-title} implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. The most common way to manage Ingress traffic is with the Ingress Controller. You can scale and replicate this pod like any other regular pod. This router service is based on link:http://www.haproxy.org/[HAProxy], which is an open source load balancer solution. - -The {product-title} route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. - -Ingress traffic accesses services in the cluster through a route. Routes and Ingress are the main resources for handling Ingress traffic. 
Ingress provides features similar to a route, such as accepting external requests and delegating them based on the route. However, with Ingress you can only allow certain types of connections: HTTP/2, HTTPS and server name identification (SNI), and TLS with certificate. In {product-title}, routes are generated to meet the conditions specified by the Ingress resource. diff --git a/modules/nw-ne-openshift-dns.adoc b/modules/nw-ne-openshift-dns.adoc deleted file mode 100644 index 3bf03d6c308f..000000000000 --- a/modules/nw-ne-openshift-dns.adoc +++ /dev/null @@ -1,19 +0,0 @@ -// Module included in the following assemblies: -// * understanding-networking.adoc - - -[id="nw-ne-openshift-dns_{context}"] -= {product-title} DNS - -If you are running multiple services, such as front-end and back-end services for -use with multiple pods, environment variables are created for user names, -service IPs, and more so the front-end pods can communicate with the back-end -services. If the service is deleted and recreated, a new IP address can be -assigned to the service, and requires the front-end pods to be recreated to pick -up the updated values for the service IP environment variable. Additionally, the -back-end service must be created before any of the front-end pods to ensure that -the service IP is generated properly, and that it can be provided to the -front-end pods as an environment variable. - -For this reason, {product-title} has a built-in DNS so that the services can be -reached by the service DNS as well as the service IP/port. diff --git a/modules/nw-networking-glossary-terms.adoc b/modules/nw-networking-glossary-terms.adoc deleted file mode 100644 index f1b12665d6b7..000000000000 --- a/modules/nw-networking-glossary-terms.adoc +++ /dev/null @@ -1,118 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/understanding-networking.adoc - -:_mod-docs-content-type: REFERENCE -[id="nw-networking-glossary-terms_{context}"] -= Glossary of common terms for {product-title} networking - -This glossary defines common terms that are used in the networking content. - -authentication:: -To control access to an {product-title} cluster, a cluster administrator can configure user authentication and ensure only approved users access the cluster. To interact with an {product-title} cluster, you must authenticate to the {product-title} API. You can authenticate by providing an OAuth access token or an X.509 client certificate in your requests to the {product-title} API. - -AWS Load Balancer Operator:: -The AWS Load Balancer (ALB) Operator deploys and manages an instance of the `aws-load-balancer-controller`. - -Cluster Network Operator:: -The Cluster Network Operator (CNO) deploys and manages the cluster network components in an {product-title} cluster. This includes deployment of the Container Network Interface (CNI) network plugin selected for the cluster during installation. - -config map:: -A config map provides a way to inject configuration data into pods. You can reference the data stored in a config map in a volume of type `ConfigMap`. Applications running in a pod can use this data. - -custom resource (CR):: -A CR is extension of the Kubernetes API. You can create custom resources. - -DNS:: -Cluster DNS is a DNS server which serves DNS records for Kubernetes services. Containers started by Kubernetes automatically include this DNS server in their DNS searches. - -DNS Operator:: -The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods. 
This enables DNS-based Kubernetes Service discovery in {product-title}. - -deployment:: -A Kubernetes resource object that maintains the life cycle of an application. - -domain:: -Domain is a DNS name serviced by the Ingress Controller. - -egress:: -The process of data sharing externally through a network’s outbound traffic from a pod. - -External DNS Operator:: -The External DNS Operator deploys and manages ExternalDNS to provide the name resolution for services and routes from the external DNS provider to {product-title}. - -HTTP-based route:: -An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. - -Ingress:: -The Kubernetes Ingress resource in {product-title} implements the Ingress Controller with a shared router service that runs as a pod inside the cluster. - -Ingress Controller:: -The Ingress Operator manages Ingress Controllers. Using an Ingress Controller is the most common way to allow external access to an {product-title} cluster. - -installer-provisioned infrastructure:: -The installation program deploys and configures the infrastructure that the cluster runs on. - -kubelet:: -A primary node agent that runs on each node in the cluster to ensure that containers are running in a pod. - -Kubernetes NMState Operator:: -The Kubernetes NMState Operator provides a Kubernetes API for performing state-driven network configuration across the {product-title} cluster’s nodes with NMState. - -kube-proxy:: -Kube-proxy is a proxy service which runs on each node and helps in making services available to the external host. It helps in forwarding the request to correct containers and is capable of performing primitive load balancing. - -load balancers:: -{product-title} uses load balancers for communicating from outside the cluster with services running in the cluster. - -MetalLB Operator:: -As a cluster administrator, you can add the MetalLB Operator to your cluster so that when a service of type `LoadBalancer` is added to the cluster, MetalLB can add an external IP address for the service. - -multicast:: -With IP multicast, data is broadcast to many IP addresses simultaneously. - -namespaces:: -A namespace isolates specific system resources that are visible to all processes. Inside a namespace, only processes that are members of that namespace can see those resources. - -networking:: -Network information of a {product-title} cluster. - -node:: -A worker machine in the {product-title} cluster. A node is either a virtual machine (VM) or a physical machine. - -{product-title} Ingress Operator:: -The Ingress Operator implements the `IngressController` API and is the component responsible for enabling external access to {product-title} services. - -pod:: -One or more containers with shared resources, such as volume and IP addresses, running in your {product-title} cluster. -A pod is the smallest compute unit defined, deployed, and managed. - -PTP Operator:: -The PTP Operator creates and manages the `linuxptp` services. - -route:: -The {product-title} route provides Ingress traffic to services in the cluster. Routes provide advanced features that might not be supported by standard Kubernetes Ingress Controllers, such as TLS re-encryption, TLS passthrough, and split traffic for blue-green deployments. - -scaling:: -Increasing or decreasing the resource capacity. - -service:: -Exposes a running application on a set of pods. 
- -Single Root I/O Virtualization (SR-IOV) Network Operator:: -The Single Root I/O Virtualization (SR-IOV) Network Operator manages the SR-IOV network devices and network attachments in your cluster. - -software-defined networking (SDN):: -A software-defined networking (SDN) approach to provide a unified cluster network that enables communication between pods across the {product-title} cluster. - -Stream Control Transmission Protocol (SCTP):: -SCTP is a reliable message based protocol that runs on top of an IP network. - -taint:: -Taints and tolerations ensure that pods are scheduled onto appropriate nodes. You can apply one or more taints on a node. - -toleration:: -You can apply tolerations to pods. Tolerations allow the scheduler to schedule pods with matching taints. - -web console:: -A user interface (UI) to manage {product-title}. diff --git a/modules/nw-networkpolicy-optimize.adoc b/modules/nw-networkpolicy-optimize.adoc deleted file mode 100644 index c54f742829f0..000000000000 --- a/modules/nw-networkpolicy-optimize.adoc +++ /dev/null @@ -1,22 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/network_security/network_policy/about-network-policy.adoc - -[id="nw-networkpolicy-optimize-sdn_{context}"] -= Optimizations for network policy with OpenShift SDN - -Use a network policy to isolate pods that are differentiated from one another by labels within a namespace. - -It is inefficient to apply `NetworkPolicy` objects to large numbers of individual pods in a single namespace. Pod labels do not exist at the IP address level, so a network policy generates a separate Open vSwitch (OVS) flow rule for every possible link between every pod selected with a `podSelector`. - -For example, if the spec `podSelector` and the ingress `podSelector` within a `NetworkPolicy` object each match 200 pods, then 40,000 (200*200) OVS flow rules are generated. This might slow down a node. - -When designing your network policy, refer to the following guidelines: - -* Reduce the number of OVS flow rules by using namespaces to contain groups of pods that need to be isolated. -+ -`NetworkPolicy` objects that select a whole namespace, by using the `namespaceSelector` or an empty `podSelector`, generate only a single OVS flow rule that matches the VXLAN virtual network ID (VNID) of the namespace. - -* Keep the pods that do not need to be isolated in their original namespace, and move the pods that require isolation into one or more different namespaces. - -* Create additional targeted cross-namespace network policies to allow the specific traffic that you do want to allow from the isolated pods. diff --git a/modules/nw-pdncc-view.adoc b/modules/nw-pdncc-view.adoc deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/modules/nw-secondary-ext-gw-status.adoc b/modules/nw-secondary-ext-gw-status.adoc deleted file mode 100644 index 24c792804080..000000000000 --- a/modules/nw-secondary-ext-gw-status.adoc +++ /dev/null @@ -1,45 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/ovn_kubernetes_network_provider/configuring-secondary-external-gateway.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-secondary-ext-gw-status_{context}"] -= View the status of an external gateway - -You can view the status of an external gateway that is configured for your cluster. 
The `status` field for the `AdminPolicyBasedExternalRoute` custom resource reports recent status messages whenever you update the resource, subject to a few limitations: - -- Namespaces impacted are not reported in status messages -- Pods selected as part of a dynamic next hop configuration do not trigger status updates as a result of pod lifecycle events, such as pod termination - -.Prerequisites - -* You installed the OpenShift CLI (`oc`). -* You are logged in to the cluster with a user with `cluster-admin` privileges. - -.Procedure - -* To access the status logs for a secondary external gateway, enter the following command: -+ -[source,terminal] ----- -$ oc get adminpolicybasedexternalroutes -o yaml ----- -+ --- -where: - -``:: Specifies the name of an `AdminPolicyBasedExternalRoute` object. --- -+ -.Example output -[source,text] ----- -... -Status: - Last Transition Time: 2023-04-24T14:49:45Z - Messages: - Configured external gateway IPs: 172.18.0.8,172.18.0.9 - Configured external gateway IPs: 172.18.0.8 - Status: Success -Events: ----- diff --git a/modules/nw-sriov-about-all-multi-cast_mode.adoc b/modules/nw-sriov-about-all-multi-cast_mode.adoc deleted file mode 100644 index 39598acaffbc..000000000000 --- a/modules/nw-sriov-about-all-multi-cast_mode.adoc +++ /dev/null @@ -1,20 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/hardware_networks/configuring-interface-sysctl-sriov-device.adoc - -:_mod-docs-content-type: CONCEPT -[id="nw-about-all-one-sysctl-flag_{context}"] -= Setting one sysctl flag - -You can set interface-level network `sysctl` settings for a pod connected to a SR-IOV network device. - -In this example, `net.ipv4.conf.IFNAME.accept_redirects` is set to `1` on the created virtual interfaces. - -The `sysctl-tuning-test` namespace is used in this example. - -* Use the following command to create the `sysctl-tuning-test` namespace: -+ ----- -$ oc create namespace sysctl-tuning-test ----- - diff --git a/modules/nw-udn-examples.adoc b/modules/nw-udn-examples.adoc deleted file mode 100644 index 5e9409ed69de..000000000000 --- a/modules/nw-udn-examples.adoc +++ /dev/null @@ -1,67 +0,0 @@ -//module included in the following assembly: -// -// * networking/multiple_networks/primary_networks/about-user-defined-networks.adoc - -:_mod-docs-content-type: REFERENCE -[id="nw-udn-examples_{context}"] -= Configuration details and examples of UserDefinedNetworks - -The following sections includes configuration details and examples for creating user-defined networks (UDN) using the custom resource definition. - -[id=configuration-details-layer-two_{context}] -== Configuration details for Layer2 topology -The following rules apply when creating a UDN with a `Layer2` topology: - -* The `subnets` field is optional. -* The `subnets` field is of type `string` and accepts standard CIDR formats for both IPv4 and IPv6. -* The `subnets` field accepts one or two items. For two items, they must be of a different family. For example, `subnets` values of `10.100.0.0/16` and `2001:db8::/64`. -* `Layer2` subnets may be omitted. If omitted, users must configure IP addresses for the pods. As a consequence, port security only prevents MAC spoofing. -* The `Layer2` `subnets` field is mandatory when `ipamLifecycle` is specified. 
- -.Example of UDN over `Layer2` topology -[%collapsible] -==== -[source,terminal] ----- -apiVersion: k8s.ovn.org/v1 -kind: UserDefinedNetwork -metadata: - name: udn-network-primary - namespace: -spec: - topology: Layer2 - layer2: - role: Primary - subnets: ["10.150.0.0/16"] ----- -==== - -[id=configuration-details-layer-three_{context}] -== Configuration details for Layer3 topology -The following rules apply when creating a UDN with a `Layer3` topology: - -* The `subnets` field is mandatory. -* The type for `subnets` field is `cidr` and `hostsubnet`: -+ -** `cidr` is the cluster subnet and accepts a string value. -** `hostSubnet` specifies the nodes subnet prefix that the cluster subnet is split to. - -.Example of UDN over `Layer3` topology -[%collapsible] -==== -[source,terminal] ----- -apiVersion: k8s.ovn.org/v1 -kind: UserDefinedNetwork -metadata: - name: udn-network-primary - namespace: -spec: - topology: Layer3 - layer3: - role: Primary - subnets: - - cidr: 10.150.0.0/16 - hostsubnet: 24 ----- -==== \ No newline at end of file diff --git a/modules/patching-ovnk-address-ranges.adoc b/modules/patching-ovnk-address-ranges.adoc deleted file mode 100644 index 6b105cebd425..000000000000 --- a/modules/patching-ovnk-address-ranges.adoc +++ /dev/null @@ -1,43 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/ovn_kubernetes_network_provider/migrate-from-openshift-sdn.adoc - -:_mod-docs-content-type: PROCEDURE -[id="patching-ovnk-address-ranges_{context}"] -= Patching OVN-Kubernetes address ranges - -OVN-Kubernetes reserves the following IP address ranges: - -* `100.64.0.0/16`. This IP address range is used for the `internalJoinSubnet` parameter of OVN-Kubernetes by default. -* `100.88.0.0/16`. This IP address range is used for the `internalTransSwitchSubnet` parameter of OVN-Kubernetes by default. - -If these IP addresses have been used by OpenShift SDN or any external networks that might communicate with this cluster, you must patch them to use a different IP address range before initiating the limited live migration. - -The following procedure can be used to patch CIDR ranges that are in use by OpenShift SDN if the migration was initially blocked. - -[NOTE] -==== -Only use this optional procedure if your cluster or network environment overlaps with the `100.64.0.0/16` subnet or the `100.88.0.0/16` subnet. Ensure that you run the steps in the procedure before you start the limited live migration operation. -==== - -.Prerequisites - -* You have access to the cluster as a user with the `cluster-admin` role. - -.Procedure - -. If the `100.64.0.0/16` IP address range is already in use, enter the following command to patch it to a different range. The following example uses `100.63.0.0/16`. -+ -[source,terminal] ----- -$ oc patch network.operator.openshift.io cluster --type='merge' -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipv4":{"internalJoinSubnet": "100.63.0.0/16"}}}}}' ----- - -. If the `100.88.0.0/16` IP address range is already in use, enter the following command to patch it to a different range. The following example uses `100.99.0.0/16`. -+ -[source,terminal] ----- -$ oc patch network.operator.openshift.io cluster --type='merge' -p='{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"ipv4":{"internalTransitSwitchSubnet": "100.99.0.0/16"}}}}}' ----- - -After patching the `100.64.0.0/16` and `100.88.0.0/16` IP address ranges, you can initiate the limited live migration. 
\ No newline at end of file diff --git a/modules/persistent-storage-csi-cloning-using.adoc b/modules/persistent-storage-csi-cloning-using.adoc deleted file mode 100644 index e28e7e1e185e..000000000000 --- a/modules/persistent-storage-csi-cloning-using.adoc +++ /dev/null @@ -1,32 +0,0 @@ -// Module included in the following assemblies: -// -// * storage/container_storage_interface/persistent-storage-csi-cloning.adoc - -[id="persistent-storage-csi-cloning-using_{context}"] -= Using a cloned PVC as a storage volume - -A newly cloned persistent volume claim (PVC) can be consumed, cloned, snapshotted, or deleted independently of its original `dataSource` PVC. - -Pods can access storage by using the cloned PVC as a volume. For example: - -.Use CSI volume clone in the Pod -[source,yaml] ----- -kind: Pod -apiVersion: v1 -metadata: - name: mypod -spec: - containers: - - name: myfrontend - image: dockerfile/nginx - volumeMounts: - - mountPath: "/var/www/html" - name: mypd - volumes: - - name: mypd - persistentVolumeClaim: - claimName: pvc-1-clone <1> ----- - -<1> The cloned PVC created during the CSI volume cloning operation. diff --git a/modules/preferred-ibm-z-system-requirements.adoc b/modules/preferred-ibm-z-system-requirements.adoc deleted file mode 100644 index 0600387afbb5..000000000000 --- a/modules/preferred-ibm-z-system-requirements.adoc +++ /dev/null @@ -1,104 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_ibm_z/installing-ibm-z.adoc -// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc -// * installing/installing_ibm_z/installing-ibm-z-lpar.adoc -// * installing/installing_ibm_z/installing-restricted-networks-ibm-z-lpar.adoc - -ifeval::["{context}" == "installing-ibm-z"] -:ibm-z: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z"] -:ibm-z: -endif::[] -ifeval::["{context}" == "installing-ibm-z-lpar"] -:ibm-z-lpar: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z-lpar"] -:ibm-z-lpar: -endif::[] - -:_mod-docs-content-type: CONCEPT -[id="preferred-ibm-z-system-requirements_{context}"] -= Preferred {ibm-z-title} system environment - - -== Hardware requirements - -* Three LPARS that each have the equivalent of six IFLs, which are SMT2 enabled, for each cluster. -* Two network connections to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster. -ifdef::ibm-z[] -* HiperSockets that are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a {op-system-base} 8 guest to bridge to the HiperSockets network. -endif::ibm-z[] -ifdef::ibm-z-lpar[] -* HiperSockets that are attached to a node directly as a device. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a {op-system-base} 8 guest to bridge to the HiperSockets network. -endif::ibm-z-lpar[] - - -== Operating system requirements - -ifdef::ibm-z[] -* Two or three instances of z/VM 7.2 or later for high availability - -On your z/VM instances, set up: - -* Three guest virtual machines for {product-title} control plane machines, one per z/VM instance. -* At least six guest virtual machines for {product-title} compute machines, distributed across the z/VM instances. -* One guest virtual machine for the temporary {product-title} bootstrap machine. 
-* To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command `SET SHARE`. Do the same for infrastructure nodes, if they exist. See link:https://www.ibm.com/docs/en/zvm/latest?topic=commands-set-share[SET SHARE] in {ibm-name} Documentation. -endif::ibm-z[] -ifdef::ibm-z-lpar[] -* Three LPARs for {product-title} control plane machines. -* At least six LPARs for {product-title} compute machines. -* One machine or LPAR for the temporary {product-title} bootstrap machine. -endif::ibm-z-lpar[] - - -== {ibm-z-title} network connectivity requirements - -ifdef::ibm-z[] -To install on {ibm-z-name} under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need: - -* A direct-attached OSA or RoCE network adapter -* A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation. -endif::ibm-z[] -ifdef::ibm-z-lpar[] -To install on {ibm-z-name} in an LPAR, you need: - -* A direct-attached OSA or RoCE network adapter -* For a preferred setup, use OSA link aggregation. -endif::ibm-z-lpar[] - - -=== Disk storage - -ifdef::ibm-z[] -* FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. -* FCP attached disk storage -endif::ibm-z[] -ifdef::ibm-z-lpar[] -* FICON attached disk storage (DASDs). These can be dedicated DASDs that must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance. -* FCP attached disk storage -* NVMe disk storage -endif::ibm-z-lpar[] - - -=== Storage / Main Memory - -* 16 GB for {product-title} control plane machines -* 8 GB for {product-title} compute machines -* 16 GB for the temporary {product-title} bootstrap machine - -ifeval::["{context}" == "installing-ibm-z"] -:!ibm-z: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z"] -:!ibm-z: -endif::[] -ifeval::["{context}" == "installing-ibm-z-lpar"] -:!ibm-z-lpar: -endif::[] -ifeval::["{context}" == "installing-restricted-networks-ibm-z"] -:!ibm-z-lpar: -endif::[] - diff --git a/modules/providing-direct-documentation-feedback.adoc b/modules/providing-direct-documentation-feedback.adoc deleted file mode 100644 index b4bd0eab4840..000000000000 --- a/modules/providing-direct-documentation-feedback.adoc +++ /dev/null @@ -1,24 +0,0 @@ -:_module-type: CONCEPT - -[id="providing-direct-documentation-feedback_{context}"] -= Providing feedback on Red Hat documentation - -[role="_abstract"] -We appreciate your feedback on our technical content and encourage you to tell us what you think. -If you'd like to add comments, provide insights, correct a typo, or even ask a question, you can do so directly in the documentation. - -[NOTE] -==== -You must have a Red Hat account and be logged in to the customer portal. -==== - -To submit documentation feedback from the customer portal, do the following: - -. Select the *Multi-page HTML* format. -. Click the *Feedback* button at the top-right of the document. -. Highlight the section of text where you want to provide feedback. -. Click the *Add Feedback* dialog next to your highlighted text. -. 
Enter your feedback in the text box on the right of the page and then click *Submit*. - -We automatically create a tracking issue each time you submit feedback. -Open the link that is displayed after you click *Submit* and start watching the issue or add more comments. diff --git a/modules/rbac-updating-policy-definitions.adoc b/modules/rbac-updating-policy-definitions.adoc deleted file mode 100644 index 1a2e45a62e90..000000000000 --- a/modules/rbac-updating-policy-definitions.adoc +++ /dev/null @@ -1,57 +0,0 @@ -// Module included in the following assemblies: -// -// * orphaned - -ifdef::openshift-enterprise,openshift-webscale,openshift-origin[] -[id="updating-policy-definitions_{context}"] -= Updating policy definitions - -During a cluster upgrade, and on every restart of any master, the -default cluster roles are automatically reconciled to restore any missing permissions. - -If you customized default cluster roles and want to ensure a role reconciliation -does not modify them, you must take the following actions. - -.Procedure - -. Protect each role from reconciliation: -+ ----- -$ oc annotate clusterrole.rbac --overwrite rbac.authorization.kubernetes.io/autoupdate=false ----- -+ -[WARNING] -==== -You must manually update the roles that contain this setting to include any new -or required permissions after upgrading. -==== - -. Generate a default bootstrap policy template file: -+ ----- -$ oc adm create-bootstrap-policy-file --filename=policy.json ----- -+ -[NOTE] -==== -The contents of the file vary based on the {product-title} version, but the file -contains only the default policies. -==== - -. Update the *_policy.json_* file to include any cluster role customizations. - -. Use the policy file to automatically reconcile roles and role bindings that -are not reconcile protected: -+ ----- -$ oc auth reconcile -f policy.json ----- - -. Reconcile Security Context Constraints: -+ ----- -# oc adm policy reconcile-sccs \ - --additive-only=true \ - --confirm ----- -endif::[] diff --git a/modules/running-modified-installation.adoc b/modules/running-modified-installation.adoc deleted file mode 100644 index f2c75c4d0d68..000000000000 --- a/modules/running-modified-installation.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="running-modified-installation_{context}"] -= Running a modified {product-title} installation - -Running a default {product-title} {product-version} cluster is the best way to ensure that the {product-title} cluster you get will be easy to install, maintain, and upgrade going forward. However, because you may want to add to or change your {product-title} cluster, openshift-install offers several ways to modify the default installation or add to it later. These include: - -* Creating an install-config file: Changing the contents of the install-config file, to identify things like the cluster name and credentials, is fully supported. -* Creating ignition-config files: Viewing ignition-config files, which define how individual nodes are configured when they are first deployed, is fully supported. However, changing those files is not supported. -* Creating Kubernetes (manifests) and {product-title} (openshift) manifest files: You can view manifest files in the manifests and openshift directories to see how Kubernetes and {product-title} features are configured, respectively. Changing those files is not supported. 
- -Whether you want to change your {product-title} installation or simply gain a deeper understanding of the details of the installation process, the goal of this section is to step you through an {product-title} installation. Along the way, it covers: - -* The underlying activities that go on under the covers to bring up an {product-title} cluster -* Major components that are leveraged ({op-system}, Ignition, Terraform, and so on) -* Opportunities to customize the install process (install configs, Ignition configs, manifests, and so on) diff --git a/modules/service-accounts-adding-secrets.adoc b/modules/service-accounts-adding-secrets.adoc deleted file mode 100644 index 11d925ea62c7..000000000000 --- a/modules/service-accounts-adding-secrets.adoc +++ /dev/null @@ -1,70 +0,0 @@ -// Module included in the following assemblies: -// -// * authentication/using-service-accounts.adoc - -[id="service-accounts-managing-secrets_{context}"] -== Managing secrets on a service account's pod - -In addition to providing API credentials, a pod's service account determines -which secrets the pod is allowed to use. - -Pods use secrets in two ways: - -* image pull secrets, providing credentials used to pull images for the pod's containers -* mountable secrets, injecting the contents of secrets into containers as files - -To allow a secret to be used as an image pull secret by a service account's -pods, run: - ----- -$ oc secrets link --for=pull ----- - -To allow a secret to be mounted by a service account's pods, run: - ----- -$ oc secrets link --for=mount ----- - -[NOTE] -==== -Limiting secrets to only the service accounts that reference them is disabled by -default. This means that if `serviceAccountConfig.limitSecretReferences` is set -to `false` (the default setting) in the master configuration file, mounting -secrets to a service account's pods with the `--for=mount` option is not -required. However, using the `--for=pull` option to enable using an image pull -secret is required, regardless of the -`serviceAccountConfig.limitSecretReferences` value. -==== - -This example creates and adds secrets to a service account: - ----- -$ oc create secret generic secret-plans \ - --from-file=plan1.txt \ - --from-file=plan2.txt -secret/secret-plans - -$ oc create secret docker-registry my-pull-secret \ - --docker-username=mastermind \ - --docker-password=12345 \ - --docker-email=mastermind@example.com -secret/my-pull-secret - -$ oc secrets link robot secret-plans --for=mount - -$ oc secrets link robot my-pull-secret --for=pull - -$ oc describe serviceaccount robot -Name: robot -Labels: -Image pull secrets: robot-dockercfg-624cx - my-pull-secret - -Mountable secrets: robot-token-uzkbh - robot-dockercfg-624cx - secret-plans - -Tokens: robot-token-8bhpp - robot-token-uzkbh ----- diff --git a/modules/service-accounts-managing-secrets.adoc b/modules/service-accounts-managing-secrets.adoc deleted file mode 100644 index cae0fb9bf790..000000000000 --- a/modules/service-accounts-managing-secrets.adoc +++ /dev/null @@ -1,65 +0,0 @@ -// Module included in the following assemblies: -// -// * authentication/using-service-accounts.adoc - -[id="service-accounts-managing-secrets_{context}"] -= Managing allowed secrets - -You can use the service account's secrets in your application's pods for: - -* Image pull secrets, providing credentials used to pull images for the pod's containers -* Mountable secrets, injecting the contents of secrets into containers as files - -.Procedure - -. 
Create a secret: -+ ----- -$ oc create secret generic <secret_name> \ - --from-file=<secret_name>.txt - -secret/<secret_name> ----- - -. To allow a secret to be used as an image pull secret by a service account's -pods, run: -+ ----- -$ oc secrets link <service_account_name> <secret_name> --for=pull ----- - -. To allow a secret to be mounted by a service account's pods, run: -+ ----- -$ oc secrets link <service_account_name> <secret_name> --for=mount ----- - -. Confirm that the secret was added to the service account: -+ ----- -$ oc describe serviceaccount <service_account_name> -Name: <service_account_name> -Labels: <none> -Image pull secrets: robot-dockercfg-624cx - my-pull-secret - -Mountable secrets: robot-token-uzkbh - robot-dockercfg-624cx - secret-plans - -Tokens: robot-token-8bhpp - robot-token-uzkbh ----- - -//// -[NOTE] -==== -Limiting secrets to only the service accounts that reference them is disabled by -default. This means that if `serviceAccountConfig.limitSecretReferences` is set -to `false` (the default setting) in the master configuration file, mounting -secrets to a service account's pods with the `--for=mount` option is not -required. However, using the `--for=pull` option to enable using an image pull -secret is required, regardless of the -`serviceAccountConfig.limitSecretReferences` value. -==== -//// diff --git a/modules/sts-mode-installing-manual-run-installer.adoc b/modules/sts-mode-installing-manual-run-installer.adoc deleted file mode 100644 index e81bc3c9cc63..000000000000 --- a/modules/sts-mode-installing-manual-run-installer.adoc +++ /dev/null @@ -1,66 +0,0 @@ -// Module included in the following assemblies: -// -// * authentication/managing_cloud_provider_credentials/cco-mode-sts.adoc -// * authentication/managing_cloud_provider_credentials/cco-mode-gcp-workload-identity.adoc - -:_mod-docs-content-type: PROCEDURE -[id="sts-mode-installing-manual-run-installer_{context}"] -= Running the installer - -.Prerequisites - -* Configure an account with the cloud platform that hosts your cluster. -* Obtain the {product-title} release image. - -.Procedure - -. Change to the directory that contains the installation program and create the `install-config.yaml` file: -+ -[source,terminal] ----- -$ openshift-install create install-config --dir <installation_directory> ----- -+ -where `<installation_directory>` is the directory in which the installation program creates files. - -. Edit the `install-config.yaml` configuration file so that it contains the `credentialsMode` parameter set to `Manual`. -+ -.Example `install-config.yaml` configuration file -[source,yaml] ----- -apiVersion: v1 -baseDomain: cluster1.example.com -credentialsMode: Manual <1> -compute: -- architecture: amd64 - hyperthreading: Enabled ----- -<1> This line is added to set the `credentialsMode` parameter to `Manual`. - -. Create the required {product-title} installation manifests: -+ -[source,terminal] ----- -$ openshift-install create manifests ----- - -. Copy the manifests that `ccoctl` generated to the manifests directory that the installation program created: -+ -[source,terminal,subs="+quotes"] ----- -$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ ----- - -. Copy the `tls` directory containing the private key that `ccoctl` generated to the installation directory: -+ -[source,terminal,subs="+quotes"] ----- -$ cp -a /<path_to_ccoctl_output_dir>/tls . ----- - -. 
Run the {product-title} installer: -+ -[source,terminal] ----- -$ ./openshift-install create cluster ----- diff --git a/modules/understanding-installation.adoc b/modules/understanding-installation.adoc deleted file mode 100644 index dbd19c82853d..000000000000 --- a/modules/understanding-installation.adoc +++ /dev/null @@ -1,8 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="understanding-installation_{context}"] -= Understanding {product-title} installation - -{product-title} installation is designed to quickly spin up an {product-title} cluster, with the user starting the cluster required to provide as little information as possible. diff --git a/modules/updating-troubleshooting-clear.adoc b/modules/updating-troubleshooting-clear.adoc deleted file mode 100644 index 8cbda3c63614..000000000000 --- a/modules/updating-troubleshooting-clear.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module included in the following assemblies: -// -// * updating/troubleshooting_updates/recovering-update-before-applied.adoc - -[id="updating-troubleshooting-clear_{context}"] -= Recovering when an update fails before it is applied - -If an update fails before it is applied, such as when the version that you specify cannot be found, you can cancel the update: - -[source,terminal] ----- -$ oc adm upgrade --clear ----- - -[IMPORTANT] -==== -If an update fails at any other point, you must contact Red Hat support. Rolling your cluster back to a previous version is not supported. -==== \ No newline at end of file diff --git a/modules/virt-importing-vm-wizard.adoc b/modules/virt-importing-vm-wizard.adoc deleted file mode 100644 index a8b119dcc336..000000000000 --- a/modules/virt-importing-vm-wizard.adoc +++ /dev/null @@ -1,150 +0,0 @@ -// Module included in the following assemblies: -// -// * virt/virtual_machines/importing_vms/virt-importing-vmware-vm.adoc -// * virt/virtual_machines/importing_vms/virt-importing-rhv-vm.adoc - -[id="virt-importing-vm-wizard_{context}"] -= Importing a virtual machine with the VM Import wizard - -You can import a single virtual machine with the VM Import wizard. - -ifdef::virt-importing-vmware-vm[] -You can also import a VM template. If you import a VM template, {VirtProductName} creates a virtual machine based on the template. - -.Prerequisites - -* You must have admin user privileges. -* The VMware Virtual Disk Development Kit (VDDK) image must be in an image registry that is accessible to your {VirtProductName} environment. -* The VDDK image must be added to the `spec.vddkInitImage` field of the `HyperConverged` custom resource (CR). -* The VM must be powered off. -* Virtual disks must be connected to IDE or SCSI controllers. If virtual disks are connected to a SATA controller, you can change them to IDE controllers and then migrate the VM. -* The {VirtProductName} local and shared persistent storage classes must support VM import. -* The {VirtProductName} storage must be large enough to accommodate the virtual disk. -+ -[WARNING] -==== -If you are using Ceph RBD block-mode volumes, the storage must be large enough to accommodate the virtual disk. If the disk is too large for the available storage, the import process fails and the PV that is used to copy the virtual disk is not released. You will not be able to import another virtual machine or to clean up the storage because there are insufficient resources to support object deletion. To resolve this situation, you must add more object storage devices to the storage back end. 
-
-[IMPORTANT]
-====
-If an update fails at any other point, you must contact Red Hat support. Rolling your cluster back to a previous version is not supported.
-====
\ No newline at end of file
diff --git a/modules/virt-importing-vm-wizard.adoc b/modules/virt-importing-vm-wizard.adoc
deleted file mode 100644
index a8b119dcc336..000000000000
--- a/modules/virt-importing-vm-wizard.adoc
+++ /dev/null
@@ -1,150 +0,0 @@
-// Module included in the following assemblies:
-//
-// * virt/virtual_machines/importing_vms/virt-importing-vmware-vm.adoc
-// * virt/virtual_machines/importing_vms/virt-importing-rhv-vm.adoc
-
-[id="virt-importing-vm-wizard_{context}"]
-= Importing a virtual machine with the VM Import wizard
-
-You can import a single virtual machine with the VM Import wizard.
-
-ifdef::virt-importing-vmware-vm[]
-You can also import a VM template. If you import a VM template, {VirtProductName} creates a virtual machine based on the template.
-
-.Prerequisites
-
-* You must have admin user privileges.
-* The VMware Virtual Disk Development Kit (VDDK) image must be in an image registry that is accessible to your {VirtProductName} environment.
-* The VDDK image must be added to the `spec.vddkInitImage` field of the `HyperConverged` custom resource (CR).
-* The VM must be powered off.
-* Virtual disks must be connected to IDE or SCSI controllers. If virtual disks are connected to a SATA controller, you can change them to IDE controllers and then migrate the VM.
-* The {VirtProductName} local and shared persistent storage classes must support VM import.
-* The {VirtProductName} storage must be large enough to accommodate the virtual disk.
-+
-[WARNING]
-====
-If you are using Ceph RBD block-mode volumes, the storage must be large enough to accommodate the virtual disk. If the disk is too large for the available storage, the import process fails and the PV that is used to copy the virtual disk is not released. You will not be able to import another virtual machine or to clean up the storage because there are insufficient resources to support object deletion. To resolve this situation, you must add more object storage devices to the storage back end.
-====
-
-* The {VirtProductName} egress network policy must allow the following traffic:
-+
-[cols="1,1,1" options="header"]
-|===
-|Destination |Protocol |Port
-|VMware ESXi hosts |TCP |443
-|VMware ESXi hosts |TCP |902
-|VMware vCenter |TCP |5840
-|===
-endif::[]
-
-.Procedure
-
-. In the web console, click *Workloads* -> *Virtual Machines*.
-. Click *Create Virtual Machine* and select *Import with Wizard*.
-ifdef::virt-importing-vmware-vm[]
-. Select *VMware* from the *Provider* list.
-. Select *Connect to New Instance* or a saved vCenter instance.
-
-* If you select *Connect to New Instance*, enter the *vCenter hostname*, *Username*, and *Password*.
-* If you select a saved vCenter instance, the wizard connects to the vCenter instance using the saved credentials.
-
-. Click *Check and Save* and wait for the connection to complete.
-+
-[NOTE]
-====
-The connection details are stored in a secret. If you add a provider with an incorrect hostname, user name, or password, click *Workloads* -> *Secrets* and delete the provider secret.
-====
-
-. Select a virtual machine or a template.
-endif::[]
-ifdef::virt-importing-rhv-vm[]
-. Select *Red Hat Virtualization (RHV)* from the *Provider* list.
-. Select *Connect to New Instance* or a saved RHV instance.
-
-* If you select *Connect to New Instance*, fill in the following fields:
-
-** *API URL*: For example, `\https://<RHV_Manager_FQDN>/ovirt-engine/api`
-** *CA certificate*: Click *Browse* to upload the RHV Manager CA certificate or paste the CA certificate into the field.
-+
-View the CA certificate by running the following command:
-+
-[source,terminal]
-----
-$ openssl s_client -connect <RHV_Manager_FQDN>:443 -showcerts < /dev/null
-----
-+
-The CA certificate is the second certificate in the output.
-
-** *Username*: RHV Manager user name, for example, `ocpadmin@internal`
-** *Password*: RHV Manager password
-
-* If you select a saved RHV instance, the wizard connects to the RHV instance using the saved credentials.
-
-. Click *Check and Save* and wait for the connection to complete.
-+
-[NOTE]
-====
-The connection details are stored in a secret. If you add a provider with an incorrect URL, user name, or password, click *Workloads* -> *Secrets* and delete the provider secret.
-====
-
-. Select a cluster and a virtual machine.
-endif::[]
-. Click *Next*.
-. In the *Review* screen, review your settings.
-// RHV import options
-ifdef::virt-importing-rhv-vm[]
-. Optional: You can select *Start virtual machine on creation*.
-endif::[]
-
-. Click *Edit* to update the following settings:
-
-ifdef::virt-importing-rhv-vm[]
-* *General* -> *Name*: The VM name is limited to 63 characters.
-* *General* -> *Description*: Optional description of the VM.
-** *Storage Class*: Select *NFS* or *ocs-storagecluster-ceph-rbd*.
-+
-If you select *ocs-storagecluster-ceph-rbd*, you must set the *Volume Mode* of the disk to *Block*.
-
-** *Advanced* -> *Volume Mode*: Select *Block*.
-* *Networking* -> *Network*: You can select a network from a list of available network attachment definition objects.
-endif::[]
-ifdef::virt-importing-vmware-vm[]
-* *General*:
-** *Description*
-** *Operating System*
-** *Flavor*
-** *Memory*
-** *CPUs*
-** *Workload Profile*
-
-* *Networking*:
-** *Name*
-** *Model*
-** *Network*
-** *Type*
-** *MAC Address*
-
-* *Storage*: Click the Options menu {kebab} of the VM disk and select *Edit* to update the following fields:
-** *Name*
-** *Source*: For example, *Import Disk*.
-** *Size*
-** *Interface*
-** *Storage Class*: Select *NFS* or *ocs-storagecluster-ceph-rbd (ceph-rbd)*.
-+
-If you select *ocs-storagecluster-ceph-rbd*, you must set the *Volume Mode* of the disk to *Block*.
-+
-Other storage classes might work, but they are not officially supported.
-
-** *Advanced* -> *Volume Mode*: Select *Block*.
-** *Advanced* -> *Access Mode*
-
-* *Advanced* -> *Cloud-init*:
-** *Form*: Enter the *Hostname* and *Authenticated SSH Keys*.
-** *Custom script*: Enter the `cloud-init` script in the text field. See the example after this procedure.
-
-* *Advanced* -> *Virtual Hardware*: You can attach a virtual CD-ROM to the imported virtual machine.
-endif::[]
-. Click *Import* or, if you have edited the import settings, *Review and Import*.
-+
-A *Successfully created virtual machine* message and a list of resources created for the virtual machine are displayed. The virtual machine appears in *Workloads* -> *Virtual Machines*.
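-
-The following is a minimal sketch of a `cloud-init` custom script for the *Custom script* field. It is illustrative only and assumes that you only need to set the guest host name and authorize one SSH public key; replace the placeholder values with your own:
-
-[source,yaml]
-----
-#cloud-config
-# Set the guest host name.
-hostname: <vm_hostname>
-# Authorize a public key for the default user.
-ssh_authorized_keys:
-  - ssh-rsa AAAAB3NzaC1yc2E... user@example.com
-----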