From a17a654ef4f3483429514a833c2238fa0709752f Mon Sep 17 00:00:00 2001 From: Max Bridges Date: Wed, 15 Oct 2025 22:23:53 -0400 Subject: [PATCH] remove unused files --- ...annotating-a-route-with-a-cookie-name.adoc | 37 -- modules/builds-output-image-digest.adoc | 32 -- ...-term-creds-component-permissions-gcp.adoc | 9 - modules/completing-installation.adoc | 33 -- modules/configmap-create.adoc | 17 - modules/configmap-overview.adoc | 62 --- modules/configuration-resource-overview.adoc | 68 ---- ...nfiguring-layer-three-routed-topology.adoc | 32 -- modules/cpmso-feat-vertical-resize.adoc | 7 - modules/creating-your-first-content.adoc | 109 ----- ...ging-deploying-storage-considerations.adoc | 108 ----- modules/feature-gate-features.adoc | 34 -- ...nifest-list-through-imagestreamimport.adoc | 44 -- ...-on-a-single-node-on-a-cloud-provider.adoc | 18 - modules/installation-about-custom.adoc | 52 --- ...tallation-azure-finalizing-encryption.adoc | 155 ------- ...stallation-creating-worker-machineset.adoc | 144 ------- .../installation-gcp-shared-vpc-ingress.adoc | 49 --- ...ation-localzone-generate-k8s-manifest.adoc | 195 --------- ...allation-osp-balancing-external-loads.adoc | 121 ------ ...-osp-creating-sr-iov-compute-machines.adoc | 196 --------- ...nstallation-osp-kuryr-settings-active.adoc | 52 --- ...tallation-osp-setting-worker-affinity.adoc | 117 ------ modules/installation-osp-troubleshooting.adoc | 40 -- ...alling-gitops-operator-in-web-console.adoc | 18 - .../installing-gitops-operator-using-cli.adoc | 60 --- ...ll-configuring-the-metal3-config-file.adoc | 52 --- ...eating-logical-volume-manager-cluster.adoc | 218 ---------- ...itations-to-configure-size-of-devices.adoc | 48 --- modules/machine-configs-and-pools.adoc | 75 ---- modules/machineset-osp-adding-bare-metal.adoc | 95 ----- .../metering-cluster-capacity-examples.adoc | 48 --- modules/metering-cluster-usage-examples.adoc | 27 -- ...metering-cluster-utilization-examples.adoc | 26 -- .../metering-configure-persistentvolumes.adoc | 57 --- modules/metering-debugging.adoc | 228 ----------- .../metering-exposing-the-reporting-api.adoc | 159 -------- modules/metering-install-operator.adoc | 133 ------ modules/metering-install-prerequisites.adoc | 13 - modules/metering-install-verify.adoc | 95 ----- modules/metering-overview.adoc | 33 -- modules/metering-prometheus-connection.adoc | 55 --- modules/metering-reports.adoc | 381 ------------------ modules/metering-store-data-in-azure.adoc | 57 --- modules/metering-store-data-in-gcp.adoc | 53 --- .../metering-store-data-in-s3-compatible.adoc | 48 --- modules/metering-store-data-in-s3.adoc | 136 ------- ...metering-store-data-in-shared-volumes.adoc | 150 ------- modules/metering-troubleshooting.adoc | 195 --------- modules/metering-uninstall-crds.adoc | 28 -- modules/metering-uninstall.adoc | 36 -- ...ring-use-mysql-or-postgresql-for-hive.adoc | 89 ---- modules/metering-viewing-report-results.adoc | 103 ----- modules/metering-writing-reports.adoc | 73 ---- modules/mod-docs-ocp-conventions.adoc | 154 ------- ...ulti-architecture-scheduling-overview.adoc | 13 - modules/nbde-managing-encryption-keys.adoc | 10 - modules/nw-multinetwork-sriov.adoc | 314 --------------- modules/nw-pdncc-view.adoc | 0 ...od-network-connectivity-configuration.adoc | 48 --- modules/nw-secondary-ext-gw-status.adoc | 45 --- .../nw-sriov-about-all-multi-cast_mode.adoc | 21 - .../persistent-storage-csi-cloning-using.adoc | 32 -- ...oviding-direct-documentation-feedback.adoc | 24 -- 
modules/rbac-updating-policy-definitions.adoc | 57 --- modules/running-modified-installation.adoc | 18 - modules/service-accounts-adding-secrets.adoc | 70 ---- .../service-accounts-managing-secrets.adoc | 65 --- ...-mode-installing-manual-run-installer.adoc | 66 --- modules/understanding-installation.adoc | 8 - modules/updating-troubleshooting-clear.adoc | 18 - modules/virt-early-access-releases.adoc | 18 - modules/virt-importing-vm-wizard.adoc | 150 ------- 73 files changed, 5651 deletions(-) delete mode 100644 modules/annotating-a-route-with-a-cookie-name.adoc delete mode 100644 modules/builds-output-image-digest.adoc delete mode 100644 modules/cco-short-term-creds-component-permissions-gcp.adoc delete mode 100644 modules/completing-installation.adoc delete mode 100644 modules/configmap-create.adoc delete mode 100644 modules/configmap-overview.adoc delete mode 100644 modules/configuration-resource-overview.adoc delete mode 100644 modules/configuring-layer-three-routed-topology.adoc delete mode 100644 modules/cpmso-feat-vertical-resize.adoc delete mode 100644 modules/creating-your-first-content.adoc delete mode 100644 modules/efk-logging-deploying-storage-considerations.adoc delete mode 100644 modules/feature-gate-features.adoc delete mode 100644 modules/importing-manifest-list-through-imagestreamimport.adoc delete mode 100644 modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc delete mode 100644 modules/installation-about-custom.adoc delete mode 100644 modules/installation-azure-finalizing-encryption.adoc delete mode 100644 modules/installation-creating-worker-machineset.adoc delete mode 100644 modules/installation-gcp-shared-vpc-ingress.adoc delete mode 100644 modules/installation-localzone-generate-k8s-manifest.adoc delete mode 100644 modules/installation-osp-balancing-external-loads.adoc delete mode 100644 modules/installation-osp-creating-sr-iov-compute-machines.adoc delete mode 100644 modules/installation-osp-kuryr-settings-active.adoc delete mode 100644 modules/installation-osp-setting-worker-affinity.adoc delete mode 100644 modules/installation-osp-troubleshooting.adoc delete mode 100644 modules/installing-gitops-operator-in-web-console.adoc delete mode 100644 modules/installing-gitops-operator-using-cli.adoc delete mode 100644 modules/ipi-install-configuring-the-metal3-config-file.adoc delete mode 100644 modules/lvms-creating-logical-volume-manager-cluster.adoc delete mode 100644 modules/lvms-limitations-to-configure-size-of-devices.adoc delete mode 100644 modules/machine-configs-and-pools.adoc delete mode 100644 modules/machineset-osp-adding-bare-metal.adoc delete mode 100644 modules/metering-cluster-capacity-examples.adoc delete mode 100644 modules/metering-cluster-usage-examples.adoc delete mode 100644 modules/metering-cluster-utilization-examples.adoc delete mode 100644 modules/metering-configure-persistentvolumes.adoc delete mode 100644 modules/metering-debugging.adoc delete mode 100644 modules/metering-exposing-the-reporting-api.adoc delete mode 100644 modules/metering-install-operator.adoc delete mode 100644 modules/metering-install-prerequisites.adoc delete mode 100644 modules/metering-install-verify.adoc delete mode 100644 modules/metering-overview.adoc delete mode 100644 modules/metering-prometheus-connection.adoc delete mode 100644 modules/metering-reports.adoc delete mode 100644 modules/metering-store-data-in-azure.adoc delete mode 100644 modules/metering-store-data-in-gcp.adoc delete mode 100644 
modules/metering-store-data-in-s3-compatible.adoc delete mode 100644 modules/metering-store-data-in-s3.adoc delete mode 100644 modules/metering-store-data-in-shared-volumes.adoc delete mode 100644 modules/metering-troubleshooting.adoc delete mode 100644 modules/metering-uninstall-crds.adoc delete mode 100644 modules/metering-uninstall.adoc delete mode 100644 modules/metering-use-mysql-or-postgresql-for-hive.adoc delete mode 100644 modules/metering-viewing-report-results.adoc delete mode 100644 modules/metering-writing-reports.adoc delete mode 100644 modules/mod-docs-ocp-conventions.adoc delete mode 100644 modules/multi-architecture-scheduling-overview.adoc delete mode 100644 modules/nbde-managing-encryption-keys.adoc delete mode 100644 modules/nw-multinetwork-sriov.adoc delete mode 100644 modules/nw-pdncc-view.adoc delete mode 100644 modules/nw-pod-network-connectivity-configuration.adoc delete mode 100644 modules/nw-secondary-ext-gw-status.adoc delete mode 100644 modules/nw-sriov-about-all-multi-cast_mode.adoc delete mode 100644 modules/persistent-storage-csi-cloning-using.adoc delete mode 100644 modules/providing-direct-documentation-feedback.adoc delete mode 100644 modules/rbac-updating-policy-definitions.adoc delete mode 100644 modules/running-modified-installation.adoc delete mode 100644 modules/service-accounts-adding-secrets.adoc delete mode 100644 modules/service-accounts-managing-secrets.adoc delete mode 100644 modules/sts-mode-installing-manual-run-installer.adoc delete mode 100644 modules/understanding-installation.adoc delete mode 100644 modules/updating-troubleshooting-clear.adoc delete mode 100644 modules/virt-early-access-releases.adoc delete mode 100644 modules/virt-importing-vm-wizard.adoc diff --git a/modules/annotating-a-route-with-a-cookie-name.adoc b/modules/annotating-a-route-with-a-cookie-name.adoc deleted file mode 100644 index 83b40c321f80..000000000000 --- a/modules/annotating-a-route-with-a-cookie-name.adoc +++ /dev/null @@ -1,37 +0,0 @@ -// Module included in the following assemblies: -// -// *using-cookies-to-keep-route-statefulness - -:_mod-docs-content-type: PROCEDURE -[id="annotating-a-route-with-a-cookie_{context}"] -= Annotating a route with a cookie - -You can set a cookie name to overwrite the default, auto-generated one for the -route. This allows the application receiving route traffic to know the cookie -name. By deleting the cookie it can force the next request to re-choose an -endpoint. So, if a server was overloaded it tries to remove the requests from the -client and redistribute them. - -.Procedure - -. Annotate the route with the desired cookie name: -+ -[source,terminal] ----- -$ oc annotate route router.openshift.io/="-" ----- -+ -For example, to annotate the cookie name of `my_cookie` to the `my_route` with -the annotation of `my_cookie_anno`: -+ -[source,terminal] ----- -$ oc annotate route my_route router.openshift.io/my_cookie="-my_cookie_anno" ----- - -. 
Save the cookie, and access the route: -+ -[source,terminal] ----- -$ curl $my_route -k -c /tmp/my_cookie ----- diff --git a/modules/builds-output-image-digest.adoc b/modules/builds-output-image-digest.adoc deleted file mode 100644 index c4bb8b1e0aa0..000000000000 --- a/modules/builds-output-image-digest.adoc +++ /dev/null @@ -1,32 +0,0 @@ -// Module included in the following assemblies: -// -// * unused_topics/builds-output-image-digest - -[id="builds-output-image-digest_{context}"] -= Output image digest - -Built images can be uniquely identified by their digest, which can -later be used to pull the image by digest regardless of its current tag. - -ifdef::openshift-enterprise,openshift-webscale,openshift-origin[] -`Docker` and -endif::[] -`Source-to-Image (S2I)` builds store the digest in -`Build.status.output.to.imageDigest` after the image is pushed to a registry. -The digest is computed by the registry. Therefore, it may not always be present, -for example when the registry did not return a digest, or when the builder image -did not understand its format. - -.Built Image Digest After a Successful Push to the Registry -[source,yaml] ----- -status: - output: - to: - imageDigest: sha256:29f5d56d12684887bdfa50dcd29fc31eea4aaf4ad3bec43daf19026a7ce69912 ----- - -[role="_additional-resources"] -.Additional resources -* link:https://docs.docker.com/registry/spec/api/#/content-digests[Docker Registry HTTP API V2: digest] -* link:https://docs.docker.com/engine/reference/commandline/pull/#/pull-an-image-by-digest-immutable-identifier[`docker pull`: pull the image by digest] diff --git a/modules/cco-short-term-creds-component-permissions-gcp.adoc b/modules/cco-short-term-creds-component-permissions-gcp.adoc deleted file mode 100644 index 1248c24440dd..000000000000 --- a/modules/cco-short-term-creds-component-permissions-gcp.adoc +++ /dev/null @@ -1,9 +0,0 @@ -// Module included in the following assemblies: -// -// * authentication/managing_cloud_provider_credentials/cco-short-term-creds.adoc - -:_mod-docs-content-type: REFERENCE -[id="cco-short-term-creds-component-permissions-gcp_{context}"] -= GCP component secret permissions requirements - -//This topic is a placeholder for when GCP role granularity can bbe documented \ No newline at end of file diff --git a/modules/completing-installation.adoc b/modules/completing-installation.adoc deleted file mode 100644 index a3d3235f7312..000000000000 --- a/modules/completing-installation.adoc +++ /dev/null @@ -1,33 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="completing-installation_{context}"] -= Completing and verifying the {product-title} installation - -When the bootstrap node is done with its work and has handed off control to the new {product-title} cluster, the bootstrap node is destroyed. The installation program waits for the cluster to initialize, creates a route to the {product-title} console, and presents the information and credentials you require to log in to the cluster. Here’s an example: - ----- -INFO Install complete!                                 - -INFO Run 'export KUBECONFIG=/home/joe/ocp/auth/kubeconfig' to manage the cluster with 'oc', the {product-title} CLI. - -INFO The cluster is ready when 'oc login -u kubeadmin -p ' succeeds (wait a few minutes). 
- -INFO Access the {product-title} web-console here: https://console-openshift-console.apps.mycluster.devel.example.com - -INFO Login to the console with user: kubeadmin, password: "password" ----- - -To access the {product-title} cluster from your web browser, log in as kubeadmin with the password, using the URL shown: - -     https://console-openshift-console.apps.mycluster.devel.example.com - -To access the {product-title} cluster from the command line, identify the location of the credentials file (export the KUBECONFIG variable) and log in as kubeadmin with the provided password: ----- -$ export KUBECONFIG=/home/joe/ocp/auth/kubeconfig - -$ oc login -u kubeadmin -p ----- - -At this point, you can begin using the {product-title} cluster. To understand the management of your {product-title} cluster going forward, you should explore the {product-title} control plane. diff --git a/modules/configmap-create.adoc b/modules/configmap-create.adoc deleted file mode 100644 index 5284caf4cfe0..000000000000 --- a/modules/configmap-create.adoc +++ /dev/null @@ -1,17 +0,0 @@ -// Module included in the following assemblies: -// -// * builds/setting-up-trusted-ca - -[id="configmap-create_{context}"] -= Creating a ConfigMap - -You can use the following command to create a ConfigMap from -directories, specific files, or literal values. - -.Procedure - -* Create a ConfigMap: - ----- -$ oc create configmap [options] ----- diff --git a/modules/configmap-overview.adoc b/modules/configmap-overview.adoc deleted file mode 100644 index 6c7ba1fcbbb6..000000000000 --- a/modules/configmap-overview.adoc +++ /dev/null @@ -1,62 +0,0 @@ -// Module included in the following assemblies: -// -// * builds/setting-up-trusted-ca - -[id="configmap-overview_{context}"] -= Understanding ConfigMaps - -Many applications require configuration using some combination of configuration -files, command line arguments, and environment variables. These configuration -artifacts are decoupled from image content in order to keep containerized -applications portable. - -The ConfigMap object provides mechanisms to inject containers with -configuration data while keeping containers agnostic of {product-title}. A -ConfigMap can be used to store fine-grained information like individual -properties or coarse-grained information like entire configuration files or JSON -blobs. - -The ConfigMap API object holds key-value pairs of configuration data that -can be consumed in pods or used to store configuration data for system -components such as controllers. ConfigMap is similar to secrets, but -designed to more conveniently support working with strings that do not contain -sensitive information. For example: - -.ConfigMap Object Definition -[source,yaml] ----- -kind: ConfigMap -apiVersion: v1 -metadata: - creationTimestamp: 2016-02-18T19:14:38Z - name: example-config - namespace: default -data: <1> - example.property.1: hello - example.property.2: world - example.property.file: |- - property.1=value-1 - property.2=value-2 - property.3=value-3 -binaryData: - bar: L3Jvb3QvMTAw <2> ----- -<1> Contains the configuration data. -<2> Points to a file that contains non-UTF8 data, for example, a binary Java keystore file. -Enter the file data in Base 64. - -[NOTE] -==== -You can use the `binaryData` field when you create a ConfigMap from a binary -file, such as an image. -==== - -Configuration data can be consumed in pods in a variety of ways. A ConfigMap -can be used to: - -1. Populate the value of environment variables. -2. 
Set command-line arguments in a container. -3. Populate configuration files in a volume. - -Both users and system components can store configuration data in a -ConfigMap. diff --git a/modules/configuration-resource-overview.adoc b/modules/configuration-resource-overview.adoc deleted file mode 100644 index ceef1e265043..000000000000 --- a/modules/configuration-resource-overview.adoc +++ /dev/null @@ -1,68 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="configuration-resource-overview_{context}"] -= About Configuration Resources in {product-title} - -You perform many customization and configuration tasks after you deploy your -cluster, including configuring networking and setting your identity provider. - -In {product-title}, you modify Configuration Resources to determine the behavior -of these integrations. The Configuration Resources are controlled by Operators -that are managed by the Cluster Version Operator, which manages all of the -Operators that run your cluster's control plane. - -You can customize the following Configuration Resources: - -[cols="3a,8a",options="header"] -|=== - -|Configuration Resource |Description -|Authentication -| - -|DNS -| - -|Samples -| * *ManagementState:* -** *Managed.* The operator updates the samples as the configuration dictates. -** *Unmanaged.* The operator ignores updates to the samples resource object and -any imagestreams or templates in the `openshift` namespace. -** *Removed.* The operator removes the set of managed imagestreams -and templates in the `openshift` namespace. It ignores new samples created by -the cluster administrator or any samples in the skipped lists. After the removals are -complete, the operator works like it is in the `Unmanaged` state and ignores -any watch events on the sample resources, imagestreams, or templates. It -operates on secrets to facilitate the CENTOS to RHEL switch. There are some -caveats around concurrent create and removal. -* *Samples Registry:* Overrides the registry from which images are imported. -* *Architecture:* Place holder to choose an architecture type. Currently only x86 -is supported. -* *Skipped Imagestreams:* Imagestreams that are in the operator's -inventory, but that the cluster administrator wants the operator to ignore or not manage. -* *Skipped Templates:* Templates that are in the operator's inventory, but that -the cluster administrator wants the operator to ignore or not manage. - -|Infrastructure -| - -|Ingress -| - -|Network -| - -|OAuth -| - -|=== - -While you can complete many other customizations and configure other integrations -with an {product-title} cluster, configuring these resources is a common first -step after you deploy a cluster. - -Like all Operators, the Configuration Resources are governed by -Custom Resource Definitions (CRD). You customize the CRD for each -Configuration Resource that you want to modify in your cluster. 
diff --git a/modules/configuring-layer-three-routed-topology.adoc b/modules/configuring-layer-three-routed-topology.adoc deleted file mode 100644 index 0b614d892a69..000000000000 --- a/modules/configuring-layer-three-routed-topology.adoc +++ /dev/null @@ -1,32 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/multiple_networks/configuring-additional-network.adoc - -:_mod-docs-content-type: CONCEPT -[id="configuration-layer-three-routed-topology_{context}"] -= Configuration for a routed topology - -The routed (layer 3) topology networks are a simplified topology for the cluster default network without egress or ingress. In this topology, there is one logical switch per node, each with a different subnet, and a router interconnecting all logical switches. - -This configuration can be used for IPv6 and dual-stack deployments. - -[NOTE] -==== -* Layer 3 routed topology networks only allow for the transfer of data packets between pods within a cluster. -* Creating a secondary network with an IPv6 subnet or dual-stack subnets fails on a single-stack {product-title} cluster. This is a known limitation and will be fixed a future version of {product-title}. -==== - -The following `NetworkAttachmentDefinition` custom resource definition (CRD) YAML describes the fields needed to configure a routed secondary network. - -[source,yaml] ----- - { - "cniVersion": "0.3.1", - "name": "ns1-l3-network", - "type": "ovn-k8s-cni-overlay", - "topology":"layer3", - "subnets": "10.128.0.0/16/24", - "mtu": 1300, - "netAttachDefName": "ns1/l3-network" - } ----- \ No newline at end of file diff --git a/modules/cpmso-feat-vertical-resize.adoc b/modules/cpmso-feat-vertical-resize.adoc deleted file mode 100644 index c0f3ace4e601..000000000000 --- a/modules/cpmso-feat-vertical-resize.adoc +++ /dev/null @@ -1,7 +0,0 @@ -// Module included in the following assemblies: -// -// * machine_management/cpmso-about.adoc - -:_mod-docs-content-type: CONCEPT -[id="cpmso-feat-vertical-resize_{context}"] -= Vertical resizing of the control plane \ No newline at end of file diff --git a/modules/creating-your-first-content.adoc b/modules/creating-your-first-content.adoc deleted file mode 100644 index 041c17231c11..000000000000 --- a/modules/creating-your-first-content.adoc +++ /dev/null @@ -1,109 +0,0 @@ -// Module included in the following assemblies: -// -// assembly_getting-started-modular-docs-ocp.adoc - -// Base the file name and the ID on the module title. For example: -// * file name: doing-procedure-a.adoc -// * ID: [id="doing-procedure-a"] -// * Title: = Doing procedure A - -:_mod-docs-content-type: PROCEDURE -[id="creating-your-first-content_{context}"] -= Creating your first content - -In this procedure, you will create your first example content using modular -docs for the OpenShift docs repository. - -.Prerequisites - -* You have forked and then cloned the OpenShift docs repository locally. -* You have downloaded and are using Atom text editor for creating content. -* You have installed AsciiBinder (the build tool for OpenShift docs). - -.Procedure - -. Navigate to your locally cloned OpenShift docs repository on a command line. - -. Create a new feature branch: - -+ ----- -git checkout master -git checkout -b my_first_mod_docs ----- -+ -. If there is no `modules` directory in the root folder, create one. - -. In this `modules` directory, create a file called `my-first-module.adoc`. - -. 
Open this newly created file in Atom and copy into this file the contents from -the link:https://raw.githubusercontent.com/redhat-documentation/modular-docs/master/modular-docs-manual/files/TEMPLATE_PROCEDURE_doing-one-procedure.adoc[procedure template] -from Modular docs repository. - -. Replace the content in this file with some example text using the guidelines -in the comments. Give this module the title `My First Module`. Save this file. -You have just created your first module. - -. Create a new directory from the root of your OpenShift docs repository and -call it `my_guide`. - -. In this my_guide directory, create a new file called -`assembly_my-first-assembly.adoc`. - -. Open this newly created file in Atom and copy into this file the contents from -the link:https://raw.githubusercontent.com/redhat-documentation/modular-docs/master/modular-docs-manual/files/TEMPLATE_ASSEMBLY_a-collection-of-modules.adoc[assembly template] -from Modular docs repository. - -. Replace the content in this file with some example text using the guidelines -in the comments. Give this assembly the title: `My First Assembly`. - -. Before the first anchor id in this assembly file, add a `:context:` attribute: - -+ -`:context: assembly-first-content` - -. After the Prerequisites section, add the module created earlier (the following is -deliberately spelled incorrectly to pass validation. Use 'include' instead of 'ilude'): - -+ -`ilude::modules/my-first-module.adoc[leveloffset=+1]` - -+ -Remove the other includes that are present in this file. Save this file. - -. Open up `my-first-module.adoc` in the `modules` folder. At the top of -this file, in the comments section, add the following to indicate in which -assembly this module is being used: - -+ ----- -// Module included in the following assemblies: -// -// my_guide/assembly_my-first-assembly.adoc ----- - -. Open up `_topic_map.yml` from the root folder and add these lines at the end -of this file and then save. - -+ ----- ---- -Name: OpenShift CCS Mod Docs First Guide -Dir: my_guide -Distros: openshift-* -Topics: -- Name: My First Assembly - File: assembly_my-first-assembly ----- - -. On the command line, run `asciibinder` from the root folder of openshift-docs. -You don't have to add or commit your changes for asciibinder to run. - -. After the asciibinder build completes, open up your browser and navigate to -/openshift-docs/_preview/openshift-enterprise/my_first_mod_docs/my_guide/assembly_my-first-assembly.html - -. Confirm that your book `my_guide` has an assembly `My First Assembly` with the -contents from your module `My First Module`. - -NOTE: You can delete this branch now if you are done testing. This branch -shouldn't be submitted to the upstream openshift-docs repository. diff --git a/modules/efk-logging-deploying-storage-considerations.adoc b/modules/efk-logging-deploying-storage-considerations.adoc deleted file mode 100644 index 6ee41b33d064..000000000000 --- a/modules/efk-logging-deploying-storage-considerations.adoc +++ /dev/null @@ -1,108 +0,0 @@ -// Module included in the following assemblies: -// -// * logging/efk-logging-deploy.adoc - -[id="efk-logging-deploy-storage-considerations_{context}"] -= Storage considerations for cluster logging and {product-title} - -An Elasticsearch index is a collection of primary shards and its corresponding replica -shards. This is how ES implements high availability internally, therefore there -is little requirement to use hardware based mirroring RAID variants. 
RAID 0 can still -be used to increase overall disk performance. - -//Following paragraph also in nodes/efk-logging-elasticsearch - -Elasticsearch is a memory-intensive application. Each Elasticsearch node needs 16G of memory for both memory requests and CPU limits -unless you specify otherwise the Cluster Logging Custom Resource. The initial set of {product-title} nodes might not be large enough -to support the Elasticsearch cluster. You must add additional nodes to the {product-title} cluster to run with the recommended -or higher memory. Each Elasticsearch node can operate with a lower memory setting though this is not recommended for production deployments. - -//// -Each Elasticsearch data node requires its own individual storage, but an {product-title} deployment -can only provide volumes shared by all of its pods, which again means that -Elasticsearch clusters should not be implemented with a single deployment. -//// - -A persistent volume is required for each Elasticsearch deployment to have one data volume per data node. On {product-title} this is achieved using -Persistent Volume Claims. - -The Elasticsearch Operator names the PVCs using the Elasticsearch resource name. Refer to -Persistent Elasticsearch Storage for more details. - -Below are capacity planning guidelines for {product-title} aggregate logging. - -*Example scenario* - -Assumptions: - -. Which application: Apache -. Bytes per line: 256 -. Lines per second load on application: 1 -. Raw text data -> JSON - -Baseline (256 characters per minute -> 15KB/min) - -[cols="3,4",options="header"] -|=== -|Logging Pods -|Storage Throughput - -|3 es -1 kibana -1 curator -1 fluentd -| 6 pods total: 90000 x 86400 = 7,7 GB/day - -|3 es -1 kibana -1 curator -11 fluentd -| 16 pods total: 225000 x 86400 = 24,0 GB/day - -|3 es -1 kibana -1 curator -20 fluentd -|25 pods total: 225000 x 86400 = 32,4 GB/day -|=== - - -Calculating total logging throughput and disk space required for your {product-title} cluster requires knowledge of your applications. For example, if one of your -applications on average logs 10 lines-per-second, each 256 bytes-per-line, -calculate per-application throughput and disk space as follows: - ----- - (bytes-per-line * (lines-per-second) = 2560 bytes per app per second - (2560) * (number-of-pods-per-node,100) = 256,000 bytes per second per node - 256k * (number-of-nodes) = total logging throughput per cluster ----- - -Fluentd ships any logs from *systemd journal* and */var/log/containers/* to Elasticsearch. - -//// -Local SSD drives are recommended in order to achieve the best performance. In -Red Hat Enterprise Linux (RHEL) 7, the -link:https://access.redhat.com/articles/425823[deadline] IO scheduler is the -default for all block devices except SATA disks. For SATA disks, the default IO -scheduler is *cfq*. -//// - -Therefore, consider how much data you need in advance and that you are -aggregating application log data. Some Elasticsearch users have found that it -is necessary to -link:https://signalfx.com/blog/how-we-monitor-and-run-elasticsearch-at-scale/[keep -absolute storage consumption around 50% and below 70% at all times]. This -helps to avoid Elasticsearch becoming unresponsive during large merge -operations. - -By default, at 85% ES stops allocating new data to the node, at 90% ES starts de-allocating -existing shards from that node to other nodes if possible. But if no nodes have -free capacity below 85% then ES will effectively reject creating new indices -and becomes RED. 
- -[NOTE] -==== -These low and high watermark values are Elasticsearch defaults in the current release. You can modify these values, -but you must also apply any modifications to the alerts also. The alerts are based -on these defaults. -==== diff --git a/modules/feature-gate-features.adoc b/modules/feature-gate-features.adoc deleted file mode 100644 index 70df7a59097f..000000000000 --- a/modules/feature-gate-features.adoc +++ /dev/null @@ -1,34 +0,0 @@ -// Module included in the following assemblies: -// -// * nodes/nodes-cluster-disabling-features.adoc -// * nodes/nodes-cluster-enabling-features.adoc - -[id="feature-gate-features_{context}"] -= Features that are affected by FeatureGates - -The following features are affected by FeatureGates: - -[options="header"] -|=== -| FeatureGate| Description| Default - -|`RotateKubeletServerCertificate` -|Enables the rotation of the server TLS certificate on the cluster. -|True - -|`SupportPodPidsLimit` -|Enables support for limiting the number of processes (PIDs) running in a pod. -|True - -|`MachineHealthCheck` -|Enables automatically repairing unhealthy machines in a machine pool. -|True - -|`LocalStorageCapacityIsolation` -|Enable the consumption of local ephemeral storage and also the `sizeLimit` property of an `emptyDir` volume. -|False - -|=== - -You can enable these features by editing the Feature Gate Custom Resource. -Turning on these features cannot be undone and prevents the ability to upgrade your cluster. diff --git a/modules/importing-manifest-list-through-imagestreamimport.adoc b/modules/importing-manifest-list-through-imagestreamimport.adoc deleted file mode 100644 index 395ad21eb4c5..000000000000 --- a/modules/importing-manifest-list-through-imagestreamimport.adoc +++ /dev/null @@ -1,44 +0,0 @@ -// Module included in the following assemblies: -// * openshift_images/image-streams-manage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="importing-manifest-list-through-imagestreamimport_{context}"] -= Importing a manifest list through ImageStreamImport - - -You can use the `ImageStreamImport` resource to find and import image manifests from other container image registries into the cluster. Individual images or an entire image repository can be imported. - -Use the following procedure to import a manifest list through the `ImageStreamImport` object with the `importMode` value. - -.Procedure - -. Create an `ImageStreamImport` YAML file and set the `importMode` parameter to `PreserveOriginal` on the tags that you will import as a manifest list: -+ -[source,yaml] ----- -apiVersion: image.openshift.io/v1 -kind: ImageStreamImport -metadata: - name: app - namespace: myapp -spec: - import: true - images: - - from: - kind: DockerImage - name: // - to: - name: latest - referencePolicy: - type: Source - importPolicy: - importMode: "PreserveOriginal" ----- - -. 
Create the `ImageStreamImport` by running the following command: -+ -[source,terminal] ----- -$ oc create -f ----- - diff --git a/modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc b/modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc deleted file mode 100644 index f46247c6a305..000000000000 --- a/modules/install-sno_additional-requirements-for-installing-on-a-single-node-on-a-cloud-provider.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// This module is included in the following assemblies: -// -// installing/installing_sno/install-sno-preparing-to-install-sno.adoc - -:_mod-docs-content-type: CONCEPT -[id="additional-requirements-for-installing-sno-on-a-cloud-provider_{context}"] -= Additional requirements for installing {sno} on a cloud provider - -The AWS documentation for installer-provisioned installation is written with a high availability cluster consisting of three control plane nodes. When referring to the AWS documentation, consider the differences between the requirements for a {sno} cluster and a high availability cluster. - -* The required machines for cluster installation in AWS documentation indicates a temporary bootstrap machine, three control plane machines, and at least two compute machines. You require only a temporary bootstrap machine and one AWS instance for the control plane node and no worker nodes. - -* The minimum resource requirements for cluster installation in the AWS documentation indicates a control plane node with 4 vCPUs and 100GB of storage. For a single node cluster, you must have a minimum of 8 vCPU cores and 120GB of storage. - -* The `controlPlane.replicas` setting in the `install-config.yaml` file should be set to `1`. - -* The `compute.replicas` setting in the `install-config.yaml` file should be set to `0`. -This makes the control plane node schedulable. diff --git a/modules/installation-about-custom.adoc b/modules/installation-about-custom.adoc deleted file mode 100644 index 8e26117c63b6..000000000000 --- a/modules/installation-about-custom.adoc +++ /dev/null @@ -1,52 +0,0 @@ -// Module included in the following assemblies: -// -// * orphaned - -[id="installation-about-custom_{context}"] -= About the custom installation - -You can use the {product-title} installation program to customize four levels -of the program: - -* {product-title} itself -* The cluster platform -* Kubernetes -* The cluster operating system - -Changes to {product-title} and its platform are managed and supported, but -changes to Kubernetes and the cluster operating system currently are not. If -you customize unsupported levels program levels, future installation and -upgrades might fail. - -When you select values for the prompts that the installation program presents, -you customize {product-title}. You can further modify the cluster platform -by modifying the `install-config.yaml` file that the installation program -uses to deploy your cluster. In this file, you can make changes like setting the -number of machines that the control plane uses, the type of virtual machine -that the cluster deploys, or the CIDR range for the Kubernetes service network. - -It is possible, but not supported, to modify the Kubernetes objects that are injected into the cluster. -A common modification is additional manifests in the initial installation. 
-No validation is available to confirm the validity of any modifications that -you make to these manifests, so if you modify these objects, you might render -your cluster non-functional. -[IMPORTANT] -==== -Modifying the Kubernetes objects is not supported. -==== - -Similarly it is possible, but not supported, to modify the -Ignition config files for the bootstrap and other machines. No validation is -available to confirm the validity of any modifications that -you make to these Ignition config files, so if you modify these objects, you might render -your cluster non-functional. - -[IMPORTANT] -==== -Modifying the Ignition config files is not supported. -==== - -To complete a custom installation, you use the installation program to generate -the installation files and then customize them. -The installation status is stored in a hidden -file in the asset directory and contains all of the installation files. diff --git a/modules/installation-azure-finalizing-encryption.adoc b/modules/installation-azure-finalizing-encryption.adoc deleted file mode 100644 index faeb1032349b..000000000000 --- a/modules/installation-azure-finalizing-encryption.adoc +++ /dev/null @@ -1,155 +0,0 @@ -//Module included in the following assemblies: -// -// * installing/installing_azure/installing-azure-customizations.adoc -// * installing/installing_azure/installing-azure-government-region.adoc -// * installing/installing_azure/installing-azure-network-customizations.adoc -// * installing/installing_azure/installing-azure-private.adoc -// * installing/installing_azure/installing-azure-vnet.adoc - - -ifeval::["{context}" == "installing-azure-customizations"] -:azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-government-region"] -:azure-gov: -endif::[] -ifeval::["{context}" == "installing-azure-network-customizations"] -:azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-private"] -:azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-vnet"] -:azure-public: -endif::[] - -:_mod-docs-content-type: PROCEDURE -[id="finalizing-encryption_{context}"] -= Finalizing user-managed encryption after installation -If you installed {product-title} using a user-managed encryption key, you can complete the installation by creating a new storage class and granting write permissions to the Azure cluster resource group. - -.Procedure - -. Obtain the identity of the cluster resource group used by the installer: -.. If you specified an existing resource group in `install-config.yaml`, obtain its Azure identity by running the following command: -+ -[source,terminal] ----- -$ az identity list --resource-group "" ----- -.. If you did not specify a existing resource group in `install-config.yaml`, locate the resource group that the installer created, and then obtain its Azure identity by running the following commands: -+ -[source,terminal] ----- -$ az group list ----- -+ -[source,terminal] ----- -$ az identity list --resource-group "" ----- -+ -. Grant a role assignment to the cluster resource group so that it can write to the Disk Encryption Set by running the following command: -+ -[source,terminal] ----- -$ az role assignment create --role "" \// <1> - --assignee "" <2> ----- -<1> Specifies an Azure role that has read/write permissions to the disk encryption set. You can use the `Owner` role or a custom role with the necessary permissions. -<2> Specifies the identity of the cluster resource group. -+ -. 
Obtain the `id` of the disk encryption set you created prior to installation by running the following command: -+ -[source,terminal] ----- -$ az disk-encryption-set show -n \// <1> - --resource-group <2> ----- -<1> Specifies the name of the disk encryption set. -<2> Specifies the resource group that contains the disk encryption set. -The `id` is in the format of `"/subscriptions/.../resourceGroups/.../providers/Microsoft.Compute/diskEncryptionSets/..."`. -+ -. Obtain the identity of the cluster service principal by running the following command: -+ -[source,terminal] ----- -$ az identity show -g \// <1> - -n \// <2> - --query principalId --out tsv ----- -<1> Specifies the name of the cluster resource group created by the installation program. -<2> Specifies the name of the cluster service principal created by the installation program. -The identity is in the format of `12345678-1234-1234-1234-1234567890`. -ifdef::azure-gov[] -. Create a role assignment that grants the cluster service principal `Contributor` privileges to the disk encryption set by running the following command: -+ -[source,terminal] ----- -$ az role assignment create --assignee \// <1> - --role 'Contributor' \// - --scope \// <2> ----- -<1> Specifies the ID of the cluster service principal obtained in the previous step. -<2> Specifies the ID of the disk encryption set. -endif::azure-gov[] -ifdef::azure-public[] -. Create a role assignment that grants the cluster service principal necessary privileges to the disk encryption set by running the following command: -+ -[source,terminal] ----- -$ az role assignment create --assignee \// <1> - --role \// <2> - --scope \// <3> ----- -<1> Specifies the ID of the cluster service principal obtained in the previous step. -<2> Specifies the Azure role name. You can use the `Contributor` role or a custom role with the necessary permissions. -<3> Specifies the ID of the disk encryption set. -endif::azure-public[] -+ -. Create a storage class that uses the user-managed disk encryption set: -.. Save the following storage class definition to a file, for example `storage-class-definition.yaml`: -+ -[source,yaml] ----- -kind: StorageClass -apiVersion: storage.k8s.io/v1 -metadata: - name: managed-premium -provisioner: kubernetes.io/azure-disk -parameters: - skuname: Premium_LRS - kind: Managed - diskEncryptionSetID: "" <1> - resourceGroup: "" <2> -reclaimPolicy: Delete -allowVolumeExpansion: true -volumeBindingMode: WaitForFirstConsumer ----- -<1> Specifies the ID of the disk encryption set that you created in the prerequisite steps, for example `"/subscriptions/xxxxxx-xxxxx-xxxxx/resourceGroups/test-encryption/providers/Microsoft.Compute/diskEncryptionSets/disk-encryption-set-xxxxxx"`. -<2> Specifies the name of the resource group used by the installer. This is the same resource group from the first step. -.. Create the storage class `managed-premium` from the file you created by running the following command: -+ -[source,terminal] ----- -$ oc create -f storage-class-definition.yaml ----- -. Select the `managed-premium` storage class when you create persistent volumes to use encrypted storage. 
- - - -ifeval::["{context}" == "installing-azure-customizations"] -:!azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-government-region"] -:!azure-gov: -endif::[] -ifeval::["{context}" == "installing-azure-network-customizations"] -:!azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-private"] -:!azure-public: -endif::[] -ifeval::["{context}" == "installing-azure-vnet"] -:!azure-public: -endif::[] \ No newline at end of file diff --git a/modules/installation-creating-worker-machineset.adoc b/modules/installation-creating-worker-machineset.adoc deleted file mode 100644 index fab07717826c..000000000000 --- a/modules/installation-creating-worker-machineset.adoc +++ /dev/null @@ -1,144 +0,0 @@ -// Module included in the following assemblies: -// -// * none - -[id="installation-creating-worker-machineset_{context}"] -= Creating worker nodes that the cluster manages - -After your cluster initializes, you can create workers that are controlled by -a MachineSet in your Amazon Web Services (AWS) user-provisioned Infrastructure -cluster. - -.Prerequisites - -* Install a cluster on AWS using infrastructer that you provisioned. - -.Procedure - -. Optional: Launch worker nodes that are controlled by the machine API. -. View the list of MachineSets in the `openshift-machine-api` namespace: -+ ----- -$ oc get machinesets --namespace openshift-machine-api -NAME DESIRED CURRENT READY AVAILABLE AGE -test-tkh7l-worker-us-east-2a 1 1 11m -test-tkh7l-worker-us-east-2b 1 1 11m -test-tkh7l-worker-us-east-2c 1 1 11m ----- -+ -Note the `NAME` of each MachineSet. Because you use a different subnet than the -installation program expects, the worker MachineSets do not use the correct -network settings. You must edit each of these MachineSets. - -. 
Edit each worker MachineSet to provide the correct values for your cluster: -+ ----- -$ oc edit machineset --namespace openshift-machine-api test-tkh7l-worker-us-east-2a -o yaml -apiVersion: machine.openshift.io/v1beta1 -kind: MachineSet -metadata: - creationTimestamp: 2019-03-14T14:03:03Z - generation: 1 - labels: - machine.openshift.io/cluster-api-cluster: test-tkh7l - machine.openshift.io/cluster-api-machine-role: worker - machine.openshift.io/cluster-api-machine-type: worker - name: test-tkh7l-worker-us-east-2a - namespace: openshift-machine-api - resourceVersion: "2350" - selfLink: /apis/machine.openshift.io/v1beta1/namespaces/openshift-machine-api/machinesets/test-tkh7l-worker-us-east-2a - uid: e2a6c8a6-4661-11e9-a9b0-0296069fd3a2 -spec: - replicas: 1 - selector: - matchLabels: - machine.openshift.io/cluster-api-cluster: test-tkh7l - machine.openshift.io/cluster-api-machineset: test-tkh7l-worker-us-east-2a - template: - metadata: - creationTimestamp: null - labels: - machine.openshift.io/cluster-api-cluster: test-tkh7l - machine.openshift.io/cluster-api-machine-role: worker - machine.openshift.io/cluster-api-machine-type: worker - machine.openshift.io/cluster-api-machineset: test-tkh7l-worker-us-east-2a - spec: - metadata: - creationTimestamp: null - providerSpec: - value: - ami: - id: ami-07e0e0e0035b5a3fe <1> - apiVersion: awsproviderconfig.openshift.io/v1beta1 - blockDevices: - - ebs: - iops: 0 - volumeSize: 120 - volumeType: gp2 - credentialsSecret: - name: aws-cloud-credentials - deviceIndex: 0 - iamInstanceProfile: - id: test-tkh7l-worker-profile - instanceType: m4.large - kind: AWSMachineProviderConfig - metadata: - creationTimestamp: null - placement: - availabilityZone: us-east-2a - region: us-east-2 - publicIp: null - securityGroups: - - filters: - - name: tag:Name - values: - - test-tkh7l-worker-sg <2> - subnet: - filters: - - name: tag:Name - values: - - test-tkh7l-private-us-east-2a - tags: - - name: kubernetes.io/cluster/test-tkh7l - value: owned - userDataSecret: - name: worker-user-data - versions: - kubelet: "" -status: - fullyLabeledReplicas: 1 - observedGeneration: 1 - replicas: 1 ----- -<1> Specify the {op-system-first} AMI to use for your worker nodes. Use the same -value that you specified in the parameter values for your control plane and -bootstrap templates. -<2> Specify the name of the worker security group that you created in the form -`-worker-sg`. `` is the same -infrastructure name that you extracted from the Ignition config metadata, -which has the format `-`. - -//// -. Optional: Replace the `subnet` stanza with one that specifies the subnet -to deploy the machines on: -+ ----- -subnet: - filters: - - name: tag: <1> - values: - - test-tkh7l-private-us-east-2a <2> ----- -<1> Set the `` of the tag to `Name`, `ID`, or `ARN`. -<2> Specify the `Name`, `ID`, or `ARN` value for the subnet. This value must -match the `tag` type that you specify. -//// - -. 
View the machines in the `openshift-machine-api` namespace and confirm that -they are launching: -+ ----- -$ oc get machines --namespace openshift-machine-api -NAME INSTANCE STATE TYPE REGION ZONE AGE -test-tkh7l-worker-us-east-2a-hxlqn i-0e7f3a52b2919471e pending m4.4xlarge us-east-2 us-east-2a 3s ----- diff --git a/modules/installation-gcp-shared-vpc-ingress.adoc b/modules/installation-gcp-shared-vpc-ingress.adoc deleted file mode 100644 index 38aabec405af..000000000000 --- a/modules/installation-gcp-shared-vpc-ingress.adoc +++ /dev/null @@ -1,49 +0,0 @@ -// File included in the following assemblies: -// * installation/installing_gcp/installing-gcp-shared-vpc.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installation-gcp-shared-vpc-ingress_{context}"] -= Optional: Adding Ingress DNS records for shared VPC installations -If the public DNS zone exists in a host project outside the project where you installed your cluster, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard `*.apps.{baseDomain}.` or specific records. You can use A, CNAME, and other records per your requirements. - -.Prerequisites -* You completed the installation of {product-title} on GCP into a shared VPC. -* Your public DNS zone exists in a host project separate from the service project that contains your cluster. - -.Procedure -. Verify that the Ingress router has created a load balancer and populated the `EXTERNAL-IP` field by running the following command: -+ -[source,terminal] ----- -$ oc -n openshift-ingress get service router-default ----- -+ -.Example output -[source,terminal] ----- -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -router-default LoadBalancer 172.30.18.154 35.233.157.184 80:32288/TCP,443:31215/TCP 98 ----- -. Record the external IP address of the router by running the following command: -+ -[source,terminal] ----- -$ oc -n openshift-ingress get service router-default --no-headers | awk '{print $4}' ----- -. Add a record to your GCP public zone with the router's external IP address and the name `*.apps..`. You can use the `gcloud` command-line utility or the GCP web console. -. To add manual records instead of a wildcard record, create entries for each of the cluster's current routes. 
You can gather these routes by running the following command: -+ -[source,terminal] ----- -$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes ----- -+ -.Example output -[source,terminal] ----- -oauth-openshift.apps.your.cluster.domain.example.com -console-openshift-console.apps.your.cluster.domain.example.com -downloads-openshift-console.apps.your.cluster.domain.example.com -alertmanager-main-openshift-monitoring.apps.your.cluster.domain.example.com -prometheus-k8s-openshift-monitoring.apps.your.cluster.domain.example.com ----- diff --git a/modules/installation-localzone-generate-k8s-manifest.adoc b/modules/installation-localzone-generate-k8s-manifest.adoc deleted file mode 100644 index b305559295de..000000000000 --- a/modules/installation-localzone-generate-k8s-manifest.adoc +++ /dev/null @@ -1,195 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_aws/installing-aws-localzone.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installation-localzone-generate-k8s-manifest_{context}"] -= Creating the Kubernetes manifest files - -Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest files that the cluster needs to configure the machines. - -.Prerequisites - -* You obtained the {product-title} installation program. -* You created the `install-config.yaml` installation configuration file. -* You installed the `jq` package. - -.Procedure - -. Change to the directory that contains the {product-title} installation program and generate the Kubernetes manifests for the cluster by running the following command: -+ -[source,terminal] ----- -$ ./openshift-install create manifests --dir <1> ----- -+ -<1> For ``, specify the installation directory that -contains the `install-config.yaml` file you created. - -. Set the default Maximum Transmission Unit (MTU) according to the network plugin: -+ -[IMPORTANT] -==== -Generally, the Maximum Transmission Unit (MTU) between an Amazon EC2 instance in a Local Zone and an Amazon EC2 instance in the Region is 1300. See link:https://docs.aws.amazon.com/local-zones/latest/ug/how-local-zones-work.html[How Local Zones work] in the AWS documentation. -The cluster network MTU must be always less than the EC2 MTU to account for the overhead. The specific overhead is determined by your network plugin, for example: - -- OVN-Kubernetes: `100 bytes` -- OpenShift SDN: `50 bytes` - -The network plugin could provide additional features, like IPsec, that also must be decreased the MTU. Check the documentation for additional information. - -==== - -.. If you are using the `OVN-Kubernetes` network plugin, enter the following command: -+ -[source,terminal] ----- -$ cat < /manifests/cluster-network-03-config.yml -apiVersion: operator.openshift.io/v1 -kind: Network -metadata: - name: cluster -spec: - defaultNetwork: - ovnKubernetesConfig: - mtu: 1200 -EOF ----- - -.. If you are using the `OpenShift SDN` network plugin, enter the following command: -+ -[source,terminal] ----- -$ cat < /manifests/cluster-network-03-config.yml -apiVersion: operator.openshift.io/v1 -kind: Network -metadata: - name: cluster -spec: - defaultNetwork: - openshiftSDNConfig: - mtu: 1250 -EOF ----- - -. Create the machine set manifests for the worker nodes in your Local Zone. -.. 
Export a local variable that contains the name of the Local Zone that you opted your AWS account into by running the following command: -+ -[source,terminal] ----- -$ export LZ_ZONE_NAME="" <1> ----- -<1> For ``, specify the Local Zone that you opted your AWS account into, such as `us-east-1-nyc-1a`. - -.. Review the instance types for the location that you will deploy to by running the following command: -+ -[source,terminal] ----- -$ aws ec2 describe-instance-type-offerings \ - --location-type availability-zone \ - --filters Name=location,Values=${LZ_ZONE_NAME} - --region <1> ----- -<1> For ``, specify the name of the region that you will deploy to, such as `us-east-1`. - -.. Export a variable to define the instance type for the worker machines to deploy on the Local Zone subnet by running the following command: -+ -[source,terminal] ----- -$ export INSTANCE_TYPE="" <1> ----- -<1> Set `` to a tested instance type, such as `c5d.2xlarge`. - -.. Store the AMI ID as a local variable by running the following command: -+ -[source,terminal] ----- -$ export AMI_ID=$(grep ami - /openshift/99_openshift-cluster-api_worker-machineset-0.yaml \ - | tail -n1 | awk '{print$2}') ----- - -.. Store the subnet ID as a local variable by running the following command: -+ -[source,terminal] ----- -$ export SUBNET_ID=$(aws cloudformation describe-stacks --stack-name "" \ <1> - | jq -r '.Stacks[0].Outputs[0].OutputValue') ----- -<1> For ``, specify the name of the subnet stack that you created. - -.. Store the cluster ID as local variable by running the following command: -+ -[source,terminal] ----- -$ export CLUSTER_ID="$(awk '/infrastructureName: / {print $2}' /manifests/cluster-infrastructure-02-config.yml)" ----- - -.. Create the worker manifest file for the Local Zone that your VPC uses by running the following command: -+ -[source,terminal] ----- -$ cat < /openshift/99_openshift-cluster-api_worker-machineset-nyc1.yaml -apiVersion: machine.openshift.io/v1beta1 -kind: MachineSet -metadata: - labels: - machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID} - name: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME} - namespace: openshift-machine-api -spec: - replicas: 1 - selector: - matchLabels: - machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID} - machine.openshift.io/cluster-api-machineset: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME} - template: - metadata: - labels: - machine.openshift.io/cluster-api-cluster: ${CLUSTER_ID} - machine.openshift.io/cluster-api-machine-role: edge - machine.openshift.io/cluster-api-machine-type: edge - machine.openshift.io/cluster-api-machineset: ${CLUSTER_ID}-edge-${LZ_ZONE_NAME} - spec: - metadata: - labels: - zone_type: local-zone - zone_group: ${LZ_ZONE_NAME:0:-1} - node-role.kubernetes.io/edge: "" - taints: - - key: node-role.kubernetes.io/edge - effect: NoSchedule - providerSpec: - value: - ami: - id: ${AMI_ID} - apiVersion: machine.openshift.io/v1beta1 - blockDevices: - - ebs: - volumeSize: 120 - volumeType: gp2 - credentialsSecret: - name: aws-cloud-credentials - deviceIndex: 0 - iamInstanceProfile: - id: ${CLUSTER_ID}-worker-profile - instanceType: ${INSTANCE_TYPE} - kind: AWSMachineProviderConfig - placement: - availabilityZone: ${LZ_ZONE_NAME} - region: ${CLUSTER_REGION} - securityGroups: - - filters: - - name: tag:Name - values: - - ${CLUSTER_ID}-worker-sg - subnet: - id: ${SUBNET_ID} - publicIp: true - tags: - - name: kubernetes.io/cluster/${CLUSTER_ID} - value: owned - userDataSecret: - name: worker-user-data -EOF ----- diff --git 
a/modules/installation-osp-balancing-external-loads.adoc b/modules/installation-osp-balancing-external-loads.adoc deleted file mode 100644 index b9e153d5ae4c..000000000000 --- a/modules/installation-osp-balancing-external-loads.adoc +++ /dev/null @@ -1,121 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_openstack/installing-openstack-load-balancing.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installation-osp-balancing-external-loads_{context}"] -= Configuring an external load balancer - -Configure an external load balancer in {rh-openstack-first} to use your own load balancer, resolve external networking needs, or scale beyond what the default {product-title} load balancer can provide. - -The load balancer serves ports 6443, 443, and 80 to any users of the system. Port 22623 serves Ignition startup configurations to the {product-title} machines and must not be reachable from outside the cluster. - -.Prerequisites - -* Access to a {rh-openstack} administrator's account -* The https://docs.openstack.org/python-openstackclient/latest/[{rh-openstack} client] installed on the target environment - -.Procedure - -. Using the {rh-openstack} CLI, add floating IP addresses to all of the control plane machines: -+ -[source,terminal] ----- -$ openstack floating ip create --port master-port-0 ----- -+ -[source,terminal] ----- -$ openstack floating ip create --port master-port-1 ----- -+ -[source,terminal] ----- -$ openstack floating ip create --port master-port-2 ----- - -. View the new floating IPs: -+ -[source,terminal] ----- -$ openstack server list ----- - -. Incorporate the listed floating IP addresses into the load balancer configuration to allow access the cluster via port 6443. -+ -.A HAProxy configuration for port 6443 -[source,txt] ----- -listen -api-6443 - bind 0.0.0.0:6443 - mode tcp - balance roundrobin - server -master-2 :6443 check - server -master-0 :6443 check - server -master-1 :6443 check ----- - -. Repeat the previous three steps for ports 443 and 80. - -. Enable network access from the load balancer network to the control plane machines on ports 6443, 443, and 80: -+ -[source,terminal] ----- -$ openstack security group rule create master --remote-ip --ingress --protocol tcp --dst-port 6443 ----- -+ -[source,terminal] ----- -$ openstack security group rule create master --remote-ip --ingress --protocol tcp --dst-port 443 ----- -+ -[source,terminal] ----- -$ openstack security group rule create master --remote-ip --ingress --protocol tcp --dst-port 80 ----- - -[TIP] -You can also specify a particular IP address with `/32`. - -. Update the DNS entry for `api..` to point to the new load balancer: -+ -[source,txt] ----- - api.. ----- -+ -The external load balancer is now available. - -. Verify the load balancer's functionality by using the following curl command: -+ -[source,terminal] ----- -$ curl https://:6443/version --insecure ----- -+ -The output resembles the following example: -+ -[source,json] ----- -{ - "major": "1", - "minor": "11+", - "gitVersion": "v1.11.0+ad103ed", - "gitCommit": "ad103ed", - "gitTreeState": "clean", - "buildDate": "2019-01-09T06:44:10Z", - "goVersion": "go1.10.3", - "compiler": "gc", - "platform": "linux/amd64" -} ----- - -. Optional: Verify that the Ignition configuration files are available only from -within the cluster by running a curl command on port 22623 from outside the cluster: -+ -[source,terminal] ----- -$ curl https://:22623/config/master --insecure ----- -+ -The command fails. 
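After you update DNS and repeat the configuration for ports 443 and 80, you can also confirm that application traffic passes through the load balancer. The following check is a minimal sketch: `<lb_ip>` and `<cluster_domain>` are placeholders for your load balancer address and cluster domain, and the `--resolve` option is used only so that the check does not depend on a wildcard `*.apps` DNS record being in place:

[source,terminal]
----
$ curl -k -I --resolve console-openshift-console.apps.<cluster_domain>:443:<lb_ip> https://console-openshift-console.apps.<cluster_domain>
----

Any HTTP response, even a `503` from the default router backend, indicates that the load balancer is forwarding traffic on port 443.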
diff --git a/modules/installation-osp-creating-sr-iov-compute-machines.adoc b/modules/installation-osp-creating-sr-iov-compute-machines.adoc deleted file mode 100644 index dcfac67cdb0b..000000000000 --- a/modules/installation-osp-creating-sr-iov-compute-machines.adoc +++ /dev/null @@ -1,196 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_openstack/installing-openstack-user-sr-iov.adoc -// * installing/installing_openstack/installing-openstack-user-sr-iov-kuryr.adoc -// -// TODO: Get https://github.com/shiftstack/SRIOV-Compute-Nodes-Ansible-Automation into a supported -// repo, associate playbooks with individual releases, and then embed here. - -:_mod-docs-content-type: PROCEDURE -[id="installation-osp-creating-sr-iov-compute-machines_{context}"] -= Creating compute machines that run on SR-IOV networks - -After standing up the control plane, create compute machines that run on the SR-IOV networks that you created in "Creating SR-IOV networks for compute machines". - - -.Prerequisites -* You downloaded the modules in "Downloading playbook dependencies". -* You downloaded the playbooks in "Downloading the installation playbooks". -* The `metadata.yaml` file that the installation program created is in the same directory as the Ansible playbooks. -* The control plane is active. -* You created `radio` and `uplink` SR-IOV networks as described in "Creating SR-IOV networks for compute machines". - -.Procedure - -. On a command line, change the working directory to the location of the `inventory.yaml` and `common.yaml` files. - -. Add the `radio` and `uplink` networks to the end of the `inventory.yaml` file by using the `additionalNetworks` parameter: -+ -[source,yaml] ----- -.... -# If this value is non-empty, the corresponding floating IP will be -# attached to the bootstrap machine. This is needed for collecting logs -# in case of install failure. - os_bootstrap_fip: '203.0.113.20' - - additionalNetworks: - - id: radio - count: 4 <1> - type: direct - port_security_enabled: no - - id: uplink - count: 4 <1> - type: direct - port_security_enabled: no ----- -<1> The `count` parameter defines the number of SR-IOV virtual functions (VFs) to attach to each worker node. In this case, each network has four VFs. - -. 
Replace the content of the `compute-nodes.yaml` file with the following text: -+ -.`compute-nodes.yaml` -[%collapsible] -==== -[source,yaml] ----- -- import_playbook: common.yaml - -- hosts: all - gather_facts: no - - vars: - worker_list: [] - port_name_list: [] - nic_list: [] - - tasks: - # Create the SDN/primary port for each worker node - - name: 'Create the Compute ports' - os_port: - name: "{{ item.1 }}-{{ item.0 }}" - network: "{{ os_network }}" - security_groups: - - "{{ os_sg_worker }}" - allowed_address_pairs: - - ip_address: "{{ os_ingressVIP }}" - with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}" - register: ports - - # Tag each SDN/primary port with cluster name - - name: 'Set Compute ports tag' - command: - cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.1 }}-{{ item.0 }}" - with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}" - - - name: 'List the Compute Trunks' - command: - cmd: "openstack network trunk list" - when: os_networking_type == "Kuryr" - register: compute_trunks - - - name: 'Create the Compute trunks' - command: - cmd: "openstack network trunk create --parent-port {{ item.1.id }} {{ os_compute_trunk_name }}-{{ item.0 }}" - with_indexed_items: "{{ ports.results }}" - when: - - os_networking_type == "Kuryr" - - "os_compute_trunk_name|string not in compute_trunks.stdout" - - - name: ‘Call additional-port processing’ - include_tasks: additional-ports.yaml - - # Create additional ports in OpenStack - - name: ‘Create additionalNetworks ports’ - os_port: - name: "{{ item.0 }}-{{ item.1.name }}" - vnic_type: "{{ item.1.type }}" - network: "{{ item.1.uuid }}" - port_security_enabled: "{{ item.1.port_security_enabled|default(omit) }}" - no_security_groups: "{{ 'true' if item.1.security_groups is not defined else omit }}" - security_groups: "{{ item.1.security_groups | default(omit) }}" - with_nested: - - "{{ worker_list }}" - - "{{ port_name_list }}" - - # Tag the ports with the cluster info - - name: 'Set additionalNetworks ports tag' - command: - cmd: "openstack port set --tag {{ cluster_id_tag }} {{ item.0 }}-{{ item.1.name }}" - with_nested: - - "{{ worker_list }}" - - "{{ port_name_list }}" - - # Build the nic list to use for server create - - name: Build nic list - set_fact: - nic_list: "{{ nic_list | default([]) + [ item.name ] }}" - with_items: "{{ port_name_list }}" - - # Create the servers - - name: 'Create the Compute servers' - vars: - worker_nics: "{{ [ item.1 ] | product(nic_list) | map('join','-') | map('regex_replace', '(.*)', 'port-name=\\1') | list }}" - os_server: - name: "{{ item.1 }}" - image: "{{ os_image_rhcos }}" - flavor: "{{ os_flavor_worker }}" - auto_ip: no - userdata: "{{ lookup('file', 'worker.ign') | string }}" - security_groups: [] - nics: "{{ [ 'port-name=' + os_port_worker + '-' + item.0|string ] + worker_nics }}" - config_drive: yes - with_indexed_items: "{{ worker_list }}" - ----- -==== - -. 
Insert the following content into a local file that is called `additional-ports.yaml`: -+ -.`additional-ports.yaml` -[%collapsible] -==== -[source,yaml] ----- -# Build a list of worker nodes with indexes -- name: ‘Build worker list’ - set_fact: - worker_list: "{{ worker_list | default([]) + [ item.1 + '-' + item.0 | string ] }}" - with_indexed_items: "{{ [ os_compute_server_name ] * os_compute_nodes_number }}" - -# Ensure that each network specified in additionalNetworks exists -- name: ‘Verify additionalNetworks’ - os_networks_info: - name: "{{ item.id }}" - with_items: "{{ additionalNetworks }}" - register: network_info - -# Expand additionalNetworks by the count parameter in each network definition -- name: ‘Build port and port index list for additionalNetworks’ - set_fact: - port_list: "{{ port_list | default([]) + [ { - 'net_name' : item.1.id, - 'uuid' : network_info.results[item.0].openstack_networks[0].id, - 'type' : item.1.type|default('normal'), - 'security_groups' : item.1.security_groups|default(omit), - 'port_security_enabled' : item.1.port_security_enabled|default(omit) - } ] * item.1.count|default(1) }}" - index_list: "{{ index_list | default([]) + range(item.1.count|default(1)) | list }}" - with_indexed_items: "{{ additionalNetworks }}" - -# Calculate and save the name of the port -# The format of the name is cluster_name-worker-workerID-networkUUID(partial)-count -# i.e. fdp-nz995-worker-1-99bcd111-1 -- name: ‘Calculate port name’ - set_fact: - port_name_list: "{{ port_name_list | default([]) + [ item.1 | combine( {'name' : item.1.uuid | regex_search('([^-]+)') + '-' + index_list[item.0]|string } ) ] }}" - with_indexed_items: "{{ port_list }}" - when: port_list is defined ----- -==== - -. On a command line, run the `compute-nodes.yaml` playbook: -+ -[source,terminal] ----- -$ ansible-playbook -i inventory.yaml compute-nodes.yaml ----- diff --git a/modules/installation-osp-kuryr-settings-active.adoc b/modules/installation-osp-kuryr-settings-active.adoc deleted file mode 100644 index 554833a4a21a..000000000000 --- a/modules/installation-osp-kuryr-settings-active.adoc +++ /dev/null @@ -1,52 +0,0 @@ -// Module included in the following assemblies: -// -// * post_installation_configuration/network-configuration.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installation-osp-kuryr-settings-active_{context}"] -= Adjusting Kuryr ports pool settings in active deployments on {rh-openstack} - -You can use a custom resource (CR) to configure how Kuryr manages {rh-openstack-first} Neutron ports to control the speed and efficiency of pod creation on a deployed cluster. - -.Procedure - -. From a command line, open the Cluster Network Operator (CNO) CR for editing: -+ -[source,terminal] ----- -$ oc edit networks.operator.openshift.io cluster ----- - -. Edit the settings to meet your requirements. The following file is provided as an example: -+ -[source,yaml] ----- -apiVersion: operator.openshift.io/v1 -kind: Network -metadata: - name: cluster -spec: - clusterNetwork: - - cidr: 10.128.0.0/14 - hostPrefix: 23 - serviceNetwork: - - 172.30.0.0/16 - defaultNetwork: - type: Kuryr - kuryrConfig: - enablePortPoolsPrepopulation: false <1> - poolMinPorts: 1 <2> - poolBatchPorts: 3 <3> - poolMaxPorts: 5 <4> ----- -<1> Set `enablePortPoolsPrepopulation` to `true` to make Kuryr create Neutron ports when the first pod that is configured to use the dedicated network for pods is created in a namespace. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. 
The default value is `false`. -<2> Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of `poolMinPorts`. The default value is `1`. -<3> `poolBatchPorts` controls the number of new ports that are created if the number of free ports is lower than the value of `poolMinPorts`. The default value is `3`. -<4> If the number of free ports in a pool is higher than the value of `poolMaxPorts`, Kuryr deletes them until the number matches that value. Setting the value to `0` disables this upper bound, preventing pools from shrinking. The default value is `0`. - -. Save your changes and quit the text editor to commit your changes. - -[IMPORTANT] -==== -Modifying these options on a running cluster forces the kuryr-controller and kuryr-cni pods to restart. As a result, the creation of new pods and services will be delayed. -==== diff --git a/modules/installation-osp-setting-worker-affinity.adoc b/modules/installation-osp-setting-worker-affinity.adoc deleted file mode 100644 index 3dbab93b771b..000000000000 --- a/modules/installation-osp-setting-worker-affinity.adoc +++ /dev/null @@ -1,117 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_openstack/installing-openstack-installer.adoc -// * installing/installing_openstack/installing-openstack-installer-custom.adoc -// * installing/installing_openstack/installing-openstack-installer-kuryr.adoc -// * installing/installing_openstack/installing-openstack-installer-restricted.adoc -// * installing/installing_openstack/installing-openstack-user.adoc -// * installing/installing_openstack/installing-openstack-user-kuryr.adoc - - -:_mod-docs-content-type: PROCEDURE -[id="installation-osp-setting-worker-affinity_{context}"] -= Setting compute machine affinity - -Optionally, you can set the affinity policy for compute machines during installation. By default, both compute and control plane machines are created with a `soft-anti-affinity` policy. - -You can also create compute machine sets that use particular {rh-openstack} server groups after installation. - -[TIP] -==== -You can learn more about link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-instance-scheduling-and-placement_scheduling-and-placement[{rh-openstack} instance scheduling and placement] in the {rh-openstack} documentation. -==== - -.Prerequisites - -* Create the `install-config.yaml` file and complete any modifications to it. - -.Procedure - -. Using the {rh-openstack} command-line interface, create a server group for your compute machines. For example: -+ -[source,terminal] ----- -$ openstack \ - --os-compute-api-version=2.15 \ - server group create \ - --policy anti-affinity \ - my-openshift-worker-group ----- -+ -For more information, see the link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/15/html/command_line_interface_reference/server#server_group_create[`server group create` command documentation]. - -. Change to the directory that contains the installation program and create the manifests: -+ -[source,terminal] ----- -$ ./openshift-install create manifests --dir ----- -+ -where: -+ -`installation_directory` :: Specifies the name of the directory that contains the `install-config.yaml` file for your cluster. - -. Open `manifests/99_openshift-cluster-api_worker-machineset-0.yaml`, the `MachineSet` definition file. - -. 
Add the property `serverGroupID` to the definition beneath the `spec.template.spec.providerSpec.value` property. For example: -+ -[source,yaml] ----- -apiVersion: machine.openshift.io/v1beta1 -kind: MachineSet -metadata: - labels: - machine.openshift.io/cluster-api-cluster: - machine.openshift.io/cluster-api-machine-role: - machine.openshift.io/cluster-api-machine-type: - name: - - namespace: openshift-machine-api -spec: - replicas: - selector: - matchLabels: - machine.openshift.io/cluster-api-cluster: - machine.openshift.io/cluster-api-machineset: - - template: - metadata: - labels: - machine.openshift.io/cluster-api-cluster: - machine.openshift.io/cluster-api-machine-role: - machine.openshift.io/cluster-api-machine-type: - machine.openshift.io/cluster-api-machineset: - - spec: - providerSpec: - value: - apiVersion: openstackproviderconfig.openshift.io/v1alpha1 - cloudName: openstack - cloudsSecret: - name: openstack-cloud-credentials - namespace: openshift-machine-api - flavor: - image: - serverGroupID: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee <1> - kind: OpenstackProviderSpec - networks: - - filter: {} - subnets: - - filter: - name: - tags: openshiftClusterID= - securityGroups: - - filter: {} - name: - - serverMetadata: - Name: - - openshiftClusterID: - tags: - - openshiftClusterID= - trunk: true - userDataSecret: - name: -user-data - availabilityZone: ----- -<1> Add the UUID of your server group here. - -. Optional: Back up the `manifests/99_openshift-cluster-api_worker-machineset-0.yaml` file. The installation program deletes the `manifests/` directory when creating the cluster. - -When you install the cluster, the installer uses the `MachineSet` definition that you modified to create compute machines within your {rh-openstack} server group. diff --git a/modules/installation-osp-troubleshooting.adoc b/modules/installation-osp-troubleshooting.adoc deleted file mode 100644 index 8b5bcff20bd9..000000000000 --- a/modules/installation-osp-troubleshooting.adoc +++ /dev/null @@ -1,40 +0,0 @@ -// Module included in the following assemblies: -// -// * n/a - -[id="installation-osp-customizing_{context}"] - -= Troubleshooting {product-title} on OpenStack installations - -// Structure as needed in the end. This is very much a WIP. -// A few more troubleshooting and/or known issues blurbs incoming - -Unfortunately, there will always be some cases where {product-title} fails to install properly. In these events, it is helpful to understand the likely failure modes as well as how to troubleshoot the failure. - -This document discusses some troubleshooting options for {rh-openstack}-based -deployments. For general tips on troubleshooting the installation program, see the [Installer Troubleshooting](../troubleshooting.md) guide. 
- -== View instance logs - -{rh-openstack} CLI tools must be installed, then: - ----- -$ openstack console log show ----- - -== Connect to instances via SSH - -Get the IP address of the machine on the private network: -``` -openstack server list | grep master -| 0dcd756b-ad80-42f1-987a-1451b1ae95ba | cluster-wbzrr-master-1 | ACTIVE | cluster-wbzrr-openshift=172.24.0.21 | rhcos | m1.s2.xlarge | -| 3b455e43-729b-4e64-b3bd-1d4da9996f27 | cluster-wbzrr-master-2 | ACTIVE | cluster-wbzrr-openshift=172.24.0.18 | rhcos | m1.s2.xlarge | -| 775898c3-ecc2-41a4-b98b-a4cd5ae56fd0 | cluster-wbzrr-master-0 | ACTIVE | cluster-wbzrr-openshift=172.24.0.12 | rhcos | m1.s2.xlarge | -``` - -And connect to it using the control plane machine currently holding the API as a jumpbox: - -``` -ssh -J core@${floating IP address}<1> core@ -``` -<1> The floating IP address assigned to the control plane machine. diff --git a/modules/installing-gitops-operator-in-web-console.adoc b/modules/installing-gitops-operator-in-web-console.adoc deleted file mode 100644 index ccf482d82ee6..000000000000 --- a/modules/installing-gitops-operator-in-web-console.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module is included in the following assemblies: -// -// * /cicd/gitops/installing-openshift-gitops.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installing-gitops-operator-in-web-console_{context}"] -= Installing {gitops-title} Operator in web console - -.Procedure - -. Open the *Administrator* perspective of the web console and navigate to *Operators* → *OperatorHub* in the menu on the left. - -. Search for `OpenShift GitOps`, click the *{gitops-title}* tile, and then click *Install*. -+ -{gitops-title} will be installed in all namespaces of the cluster. - -After the {gitops-title} Operator is installed, it automatically sets up a ready-to-use Argo CD instance that is available in the `openshift-gitops` namespace, and an Argo CD icon is displayed in the console toolbar. -You can create subsequent Argo CD instances for your applications under your projects. diff --git a/modules/installing-gitops-operator-using-cli.adoc b/modules/installing-gitops-operator-using-cli.adoc deleted file mode 100644 index 5187cd65600d..000000000000 --- a/modules/installing-gitops-operator-using-cli.adoc +++ /dev/null @@ -1,60 +0,0 @@ -// Module is included in the following assemblies: -// -// * /cicd/gitops/installing-openshift-gitops.adoc - -:_mod-docs-content-type: PROCEDURE -[id="installing-gitops-operator-using-cli_{context}"] -= Installing {gitops-title} Operator using CLI - -[role="_abstract"] -You can install {gitops-title} Operator from the OperatorHub using the CLI. - -.Procedure - -. Create a Subscription object YAML file to subscribe a namespace to the {gitops-title}, for example, `sub.yaml`: -+ -.Example Subscription -[source,yaml] ----- -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: openshift-gitops-operator - namespace: openshift-operators -spec: - channel: latest <1> - installPlanApproval: Automatic - name: openshift-gitops-operator <2> - source: redhat-operators <3> - sourceNamespace: openshift-marketplace <4> ----- -<1> Specify the channel name from where you want to subscribe the Operator. -<2> Specify the name of the Operator to subscribe to. -<3> Specify the name of the CatalogSource that provides the Operator. -<4> The namespace of the CatalogSource. Use `openshift-marketplace` for the default OperatorHub CatalogSources. -+ -. 
Apply the `Subscription` to the cluster: -+ -[source,terminal] ----- -$ oc apply -f openshift-gitops-sub.yaml ----- -. After the installation is complete, ensure that all the pods in the `openshift-gitops` namespace are running: -+ -[source,terminal] ----- -$ oc get pods -n openshift-gitops ----- -.Example output -+ -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -cluster-b5798d6f9-zr576 1/1 Running 0 65m -kam-69866d7c48-8nsjv 1/1 Running 0 65m -openshift-gitops-application-controller-0 1/1 Running 0 53m -openshift-gitops-applicationset-controller-6447b8dfdd-5ckgh 1/1 Running 0 65m -openshift-gitops-redis-74bd8d7d96-49bjf 1/1 Running 0 65m -openshift-gitops-repo-server-c999f75d5-l4rsg 1/1 Running 0 65m -openshift-gitops-server-5785f7668b-wj57t 1/1 Running 0 53m ----- \ No newline at end of file diff --git a/modules/ipi-install-configuring-the-metal3-config-file.adoc b/modules/ipi-install-configuring-the-metal3-config-file.adoc deleted file mode 100644 index 8c42d64f8d3a..000000000000 --- a/modules/ipi-install-configuring-the-metal3-config-file.adoc +++ /dev/null @@ -1,52 +0,0 @@ -// Module included in the following assemblies: -// -// * installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc - -:_mod-docs-content-type: PROCEDURE -[id="configuring-the-metal3-config-file_{context}"] -= Configuring the `metal3-config.yaml` file - -You must create and configure a ConfigMap `metal3-config.yaml` file. - -.Procedure - -. Create a ConfigMap `metal3-config.yaml.sample`. -+ ----- -$ vim metal3-config.yaml.sample ----- -+ -Provide the following contents: -+ ----- -apiVersion: v1 -kind: ConfigMap -metadata: - name: metal3-config - namespace: openshift-machine-api -data: - cache_url: '' - deploy_kernel_url: http://:6180/images/ironic-python-agent.kernel - deploy_ramdisk_url: http://:6180/images/ironic-python-agent.initramfs - dhcp_range: 172.22.0.10,172.22.0.100 - http_port: "6180" - ironic_endpoint: http://:6385/v1/ - ironic_inspector_endpoint: http://172.22.0.3:5050/v1/ - provisioning_interface: - provisioning_ip: /24 - rhcos_image_url: ${RHCOS_URI}${RHCOS_PATH} ----- -+ -[NOTE] -==== -Replace `` with an available IP on the `provisioning` network. The default is `172.22.0.3`. -==== - -. Create the final ConfigMap. -+ ----- -$ export COMMIT_ID=$(./openshift-baremetal-install version | grep '^built from commit' | awk '{print $4}') -$ export RHCOS_PATH=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .images.openstack.path | sed 's/"//g') -$ export RHCOS_URI=$(curl -s -S https://raw.githubusercontent.com/openshift/installer/$COMMIT_ID/data/data/rhcos.json | jq .baseURI | sed 's/"//g') -$ envsubst < metal3-config.yaml.sample > metal3-config.yaml ----- diff --git a/modules/lvms-creating-logical-volume-manager-cluster.adoc b/modules/lvms-creating-logical-volume-manager-cluster.adoc deleted file mode 100644 index c9f2f60dc88f..000000000000 --- a/modules/lvms-creating-logical-volume-manager-cluster.adoc +++ /dev/null @@ -1,218 +0,0 @@ -// Module included in the following assemblies: -// -// storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc - -:_mod-docs-content-type: PROCEDURE -[id="lvms-creating-lvms-cluster_{context}"] -= Creating a Logical Volume Manager cluster on a {sno} worker node - -You can configure a {sno} worker node as a Logical Volume Manager cluster. 
-On the control-plane {sno} node, {lvms} detects and uses the additional worker nodes when the new nodes become active in the cluster. - -[NOTE] -==== -When you create a Logical Volume Manager cluster, `StorageClass` and `LVMVolumeGroup` resources work together to provide dynamic provisioning of storage. -`StorageClass` CRs define the properties of the storage that you can dynamically provision. -`LVMVolumeGroup` is a specific type of persistent volume (PV) that is backed by an LVM Volume Group. -`LVMVolumeGroup` CRs provide the back-end storage for the persistent volumes that you create. -==== - -Perform the following procedure to create a Logical Volume Manager cluster on a {sno} worker node. - -[NOTE] -==== -You also can perform the same task by using the {product-title} web console. -==== - -.Prerequisites - -* You have installed the OpenShift CLI (`oc`). - -* You have logged in as a user with `cluster-admin` privileges. - -* You installed {lvms} in a {sno} cluster and have installed a worker node for use in the {sno} cluster. - -.Procedure - -. Create the `LVMCluster` custom resource (CR). -+ -[IMPORTANT] -===== -You can only create a single instance of the `LVMCluster` custom resource (CR) on an {product-title} cluster. -===== -+ -.. Save the following YAML in the `lvmcluster.yaml` file: -+ -[source,yaml] ----- -apiVersion: lvm.topolvm.io/v1alpha1 -kind: LVMCluster -metadata: - name: lvmcluster -spec: - storage: - deviceClasses: <1> - - name: vg1 - fstype: ext4 <2> - default: true <3> - deviceSelector: <4> - paths: - - /dev/disk/by-path/pci-0000:87:00.0-nvme-1 - - /dev/disk/by-path/pci-0000:88:00.0-nvme-1 - optionalPaths: - - /dev/disk/by-path/pci-0000:89:00.0-nvme-1 - - /dev/disk/by-path/pci-0000:90:00.0-nvme-1 - thinPoolConfig: - name: thin-pool-1 - sizePercent: 90 - overprovisionRatio: 10 - nodeSelector: <5> - nodeSelectorTerms: - - matchExpressions: - - key: app - operator: In - values: - - test1 ----- -<1> To create multiple device storage classes in the cluster, create a YAML array under `deviceClasses` for each required storage class. -If you add or remove a `deviceClass`, then the update reflects in the cluster only after deleting and recreating the `topolvm-node` pod. -Configure the local device paths of the disks as an array of values in the `deviceSelector` field. -When configuring multiple device classes, you must specify the device path for each device. -<2> Set `fstype` to `ext4` or `xfs`. By default, it is set to `xfs` if the setting is not specified. -<3> Mandatory: The `LVMCluster` resource must contain a single default storage class. Set `default: false` for secondary device storage classes. -If you are upgrading the `LVMCluster` resource from a previous version, you must specify a single default device class. -<4> Optional. To control or restrict the volume group to your preferred devices, you can manually specify the local paths of the devices in the `deviceSelector` section of the `LVMCluster` YAML. The `paths` section refers to devices the `LVMCluster` adds, which means those paths must exist. The `optionalPaths` section refers to devices the `LVMCluster` might add. You must specify at least one of `paths` or `optionalPaths` when specifying the `deviceSelector` section. If you specify `paths`, it is not mandatory to specify `optionalPaths`. If you specify `optionalPaths`, it is not mandatory to specify `paths` but at least one optional path must be present on the node. If you do not specify any paths, then the `LVMCluster` adds the unused devices on the node. 
After a device is added to the `LVMCluster`, it cannot be removed. -<5> Optional: To control what worker nodes the `LVMCluster` CR is applied to, specify a set of node selector labels. -The specified labels must be present on the node in order for the `LVMCluster` to be scheduled on that node. - -.. Create the `LVMCluster` CR: -+ -[source,terminal] ----- -$ oc create -f lvmcluster.yaml ----- -+ -.Example output -[source,terminal] ----- -lvmcluster/lvmcluster created ----- -+ -The `LVMCluster` resource creates the following system-managed CRs: -+ -`LVMVolumeGroup`:: Tracks individual volume groups across multiple nodes. -`LVMVolumeGroupNodeStatus`:: Tracks the status of the volume groups on a node. - -.Verification - -Verify that the `LVMCluster` resource has created the `StorageClass`, `LVMVolumeGroup`, and `LVMVolumeGroupNodeStatus` CRs. - -[IMPORTANT] -==== -`LVMVolumeGroup` and `LVMVolumeGroupNodeStatus` are managed by {lvms}. Do not edit these CRs directly. -==== - -. Check that the `LVMCluster` CR is in a `ready` state by running the following command: -+ -[source,terminal] ----- -$ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status.deviceClassStatuses[*]}' ----- -+ -.Example output -[source,json] ----- -{ - "name": "vg1", - "nodeStatus": [ - { - "devices": [ - "/dev/nvme0n1", - "/dev/nvme1n1", - "/dev/nvme2n1" - ], - "node": "kube-node", - "status": "Ready" - } - ] -} ----- - -. Check that the storage class is created: -+ -[source,terminal] ----- -$ oc get storageclass ----- -+ -.Example output -[source,terminal] ----- -NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE -lvms-vg1 topolvm.io Delete WaitForFirstConsumer true 31m ----- - -. Check that the volume snapshot class is created: -+ -[source,terminal] ----- -$ oc get volumesnapshotclass ----- -+ -.Example output -[source,terminal] ----- -NAME DRIVER DELETIONPOLICY AGE -lvms-vg1 topolvm.io Delete 24h ----- - -. Check that the `LVMVolumeGroup` resource is created: -+ -[source,terminal] ----- -$ oc get lvmvolumegroup vg1 -o yaml ----- -+ -.Example output -[source,yaml] ----- -apiVersion: lvm.topolvm.io/v1alpha1 -kind: LVMVolumeGroup -metadata: - creationTimestamp: "2022-02-02T05:16:42Z" - generation: 1 - name: vg1 - namespace: lvm-operator-system - resourceVersion: "17242461" - uid: 88e8ad7d-1544-41fb-9a8e-12b1a66ab157 -spec: {} ----- - -. 
Check that the `LVMVolumeGroupNodeStatus` resource is created: -+ -[source,terminal] ----- -$ oc get lvmvolumegroupnodestatuses.lvm.topolvm.io kube-node -o yaml ----- -+ -.Example output -[source,yaml] ----- -apiVersion: lvm.topolvm.io/v1alpha1 -kind: LVMVolumeGroupNodeStatus -metadata: - creationTimestamp: "2022-02-02T05:17:59Z" - generation: 1 - name: kube-node - namespace: lvm-operator-system - resourceVersion: "17242882" - uid: 292de9bb-3a9b-4ee8-946a-9b587986dafd -spec: - nodeStatus: - - devices: - - /dev/nvme0n1 - - /dev/nvme1n1 - - /dev/nvme2n1 - name: vg1 - status: Ready ----- diff --git a/modules/lvms-limitations-to-configure-size-of-devices.adoc b/modules/lvms-limitations-to-configure-size-of-devices.adoc deleted file mode 100644 index 54be6ec9de4f..000000000000 --- a/modules/lvms-limitations-to-configure-size-of-devices.adoc +++ /dev/null @@ -1,48 +0,0 @@ -// Module included in the following assemblies: -// -// * storage/persistent_storage/persistent_storage_local/persistent-storage-using-lvms.adoc - -:_mod-docs-content-type: CONCEPT -[id="limitations-to-configure-size-of-devices_{context}"] -= Limitations to configure the size of the devices used in LVM Storage - -The limitations to configure the size of the devices that you can use to provision storage using {lvms} are as follows: - -* The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor. -* The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE). -** You can define the size of PE and LE during the physical and logical device creation. -** The default PE and LE size is 4 MB. -** If the size of the PE is increased, the maximum size of the LVM is determined by the kernel limits and your disk space. - -.Size limits for different architectures using the default PE and LE size -[cols="1,1,1,1,1", width="100%", options="header"] -|==== -|Architecture -|RHEL 6 -|RHEL 7 -|RHEL 8 -|RHEL 9 - -|32-bit -|16 TB -|- -|- -|- - -|64-bit - -|8 EB ^[1]^ - -100 TB ^[2]^ -|8 EB ^[1]^ - -500 TB ^[2]^ -|8 EB -|8 EB - -|==== -[.small] --- -1. Theoretical size. -2. Tested size. --- \ No newline at end of file diff --git a/modules/machine-configs-and-pools.adoc b/modules/machine-configs-and-pools.adoc deleted file mode 100644 index d627624ec571..000000000000 --- a/modules/machine-configs-and-pools.adoc +++ /dev/null @@ -1,75 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="machine-configs-and-pools_{context}"] -= Machine Configs and Machine Config Pools -Machine Config Pools manage a cluster of nodes and their corresponding -Machine Configs. Machine Configs contain configuration information for a -cluster. 
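Each pool points its nodes at a single generated Machine Config, which appears in the `CONFIG` column of the listings below. To check which configuration an individual node is actually running, you can inspect its `machineconfiguration.openshift.io` annotations. This is a minimal sketch; `<node_name>` is a placeholder, and the exact annotation names can vary between versions:

[source,terminal]
----
$ oc describe node <node_name> | grep machineconfiguration.openshift.io
----

The `currentConfig` and `desiredConfig` values match when the node has finished applying its pool's configuration.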
- -To list all Machine Config Pools that are known: - ----- -$ oc get machineconfigpools -NAME CONFIG UPDATED UPDATING DEGRADED -master master-1638c1aea398413bb918e76632f20799 False False False -worker worker-2feef4f8288936489a5a832ca8efe953 False False False ----- - -To list all Machine Configs: ----- -$ oc get machineconfig -NAME GENERATEDBYCONTROLLER IGNITIONVERSION CREATED OSIMAGEURL -00-master 4.0.0-0.150.0.0-dirty 2.2.0 16m -00-master-ssh 4.0.0-0.150.0.0-dirty 16m -00-worker 4.0.0-0.150.0.0-dirty 2.2.0 16m -00-worker-ssh 4.0.0-0.150.0.0-dirty 16m -01-master-kubelet 4.0.0-0.150.0.0-dirty 2.2.0 16m -01-worker-kubelet 4.0.0-0.150.0.0-dirty 2.2.0 16m -master-1638c1aea398413bb918e76632f20799 4.0.0-0.150.0.0-dirty 2.2.0 16m -worker-2feef4f8288936489a5a832ca8efe953 4.0.0-0.150.0.0-dirty 2.2.0 16m ----- - -To list all KubeletConfigs: - ----- -$ oc get kubeletconfigs ----- - -To get more detailed information about a KubeletConfig, including the reason for -the current condition: - ----- -$ oc describe kubeletconfig ----- - -For example: - ----- -# oc describe kubeletconfig set-max-pods - -Name: set-max-pods <1> -Namespace: -Labels: -Annotations: -API Version: machineconfiguration.openshift.io/v1 -Kind: KubeletConfig -Metadata: - Creation Timestamp: 2019-02-05T16:27:20Z - Generation: 1 - Resource Version: 19694 - Self Link: /apis/machineconfiguration.openshift.io/v1/kubeletconfigs/set-max-pods - UID: e8ee6410-2962-11e9-9bcc-664f163f5f0f -Spec: - Kubelet Config: <2> - Max Pods: 100 - Machine Config Pool Selector: <3> - Match Labels: - Custom - Kubelet: small-pods -Events: ----- - -<1> The name of the KubeletConfig. -<2> The user defined configuration. -<3> The Machine Config Pool selector to apply the KubeletConfig to. \ No newline at end of file diff --git a/modules/machineset-osp-adding-bare-metal.adoc b/modules/machineset-osp-adding-bare-metal.adoc deleted file mode 100644 index dcb83a980174..000000000000 --- a/modules/machineset-osp-adding-bare-metal.adoc +++ /dev/null @@ -1,95 +0,0 @@ -:_mod-docs-content-type: PROCEDURE -[id="machineset-osp-adding-bare-metal_{context}"] -= Adding bare-metal compute machines to a {rh-openstack} cluster - -// TODO -// Mothballed -// Reintroduce when feature is available. -You can add bare-metal compute machines to an {product-title} cluster after you deploy it -on {rh-openstack-first}. In this configuration, all machines are attached to an -existing, installer-provisioned network, and traffic between control plane and -compute machines is routed between subnets. - -[NOTE] -==== -Bare-metal compute machines are not supported on clusters that use Kuryr. -==== - -.Prerequisites - -* The {rh-openstack} link:https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/16.1/html/bare_metal_provisioning/index[Bare Metal service (Ironic)] is enabled and accessible by using the {rh-openstack} Compute API. - -* Bare metal is available as link:https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/bare_metal_provisioning/assembly_configuring-the-bare-metal-provisioning-service-after-deployment#proc_creating-flavors-for-launching-bare-metal-instances_bare-metal-post-deployment[an {rh-openstack} flavor]. - -* You deployed an {product-title} cluster on installer-provisioned infrastructure. - -* Your {rh-openstack} cloud provider is configured to route traffic between the installer-created VM -subnet and the pre-existing bare metal subnet. - -.Procedure -. 
Create a file called `baremetalMachineSet.yaml`, and then add the bare metal flavor to it: -+ -FIXME: May require update before publication. -.A sample bare metal MachineSet file -[source,yaml] ----- -apiVersion: machine.openshift.io/v1beta1 -kind: MachineSet -metadata: - labels: - machine.openshift.io/cluster-api-cluster: - machine.openshift.io/cluster-api-machine-role: - machine.openshift.io/cluster-api-machine-type: - name: - - namespace: openshift-machine-api -spec: - replicas: - selector: - matchLabels: - machine.openshift.io/cluster-api-cluster: - machine.openshift.io/cluster-api-machineset: - - template: - metadata: - labels: - machine.openshift.io/cluster-api-cluster: - machine.openshift.io/cluster-api-machine-role: - machine.openshift.io/cluster-api-machine-type: - machine.openshift.io/cluster-api-machineset: - - spec: - providerSpec: - value: - apiVersion: openstackproviderconfig.openshift.io/v1alpha1 - cloudName: openstack - cloudsSecret: - name: openstack-cloud-credentials - namespace: openshift-machine-api - flavor: - image: - kind: OpenstackProviderSpec - networks: - - filter: {} - subnets: - - filter: - name: - tags: openshiftClusterID= - securityGroups: - - filter: {} - name: - - serverMetadata: - Name: - - openshiftClusterID: - tags: - - openshiftClusterID= - trunk: true - userDataSecret: - name: -user-data ----- - -. On a command line, create the `MachineSet` resource by entering the following command: -+ -[source,terminal] ----- -$ oc create -f baremetalMachineSet.yaml ----- - -You can now use bare-metal compute machines in your {product-title} cluster. diff --git a/modules/metering-cluster-capacity-examples.adoc b/modules/metering-cluster-capacity-examples.adoc deleted file mode 100644 index 3bc78d3e5d22..000000000000 --- a/modules/metering-cluster-capacity-examples.adoc +++ /dev/null @@ -1,48 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-usage-examples.adoc - -[id="metering-cluster-capacity-examples_{context}"] -= Measure cluster capacity hourly and daily - -The following report demonstrates how to measure cluster capacity both hourly and daily. The daily report works by aggregating the hourly report's results. - -The following report measures cluster CPU capacity every hour. - -.Hourly CPU capacity by cluster example - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: cluster-cpu-capacity-hourly -spec: - query: "cluster-cpu-capacity" - schedule: - period: "hourly" <1> ----- -<1> You could change this period to `daily` to get a daily report, but with larger data sets it is more efficient to use an hourly report, then aggregate your hourly data into a daily report. - -The following report aggregates the hourly data into a daily report. - -.Daily CPU capacity by cluster example - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: cluster-cpu-capacity-daily <1> -spec: - query: "cluster-cpu-capacity" <2> - inputs: <3> - - name: ClusterCpuCapacityReportName - value: cluster-cpu-capacity-hourly - schedule: - period: "daily" ----- - -<1> To stay organized, remember to change the `name` of your report if you change any of the other values. -<2> You can also measure `cluster-memory-capacity`. Remember to update the query in the associated hourly report as well. -<3> The `inputs` section configures this report to aggregate the hourly report. Specifically, `value: cluster-cpu-capacity-hourly` is the name of the hourly report that gets aggregated. 
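To try these capacity examples, save each report to a file and apply both to the metering namespace. The daily report can only aggregate data that the hourly report has already produced. This is a minimal sketch, assuming the file names below and the default `openshift-metering` namespace:

[source,terminal]
----
$ oc apply -n openshift-metering -f cluster-cpu-capacity-hourly.yaml
$ oc apply -n openshift-metering -f cluster-cpu-capacity-daily.yaml
$ oc get reports -n openshift-metering cluster-cpu-capacity-hourly cluster-cpu-capacity-daily
----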
diff --git a/modules/metering-cluster-usage-examples.adoc b/modules/metering-cluster-usage-examples.adoc deleted file mode 100644 index ed6188e8ca3b..000000000000 --- a/modules/metering-cluster-usage-examples.adoc +++ /dev/null @@ -1,27 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-usage-examples.adoc - -[id="metering-cluster-usage-examples_{context}"] -= Measure cluster usage with a one-time report - -The following report measures cluster usage from a specific starting date forward. The report only runs once, after you save it and apply it. - -.CPU usage by cluster example - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: cluster-cpu-usage-2020 <1> -spec: - reportingStart: '2020-01-01T00:00:00Z' <2> - reportingEnd: '2020-12-30T23:59:59Z' - query: cluster-cpu-usage <3> - runImmediately: true <4> ----- -<1> To stay organized, remember to change the `name` of your report if you change any of the other values. -<2> Configures the report to start using data from the `reportingStart` timestamp until the `reportingEnd` timestamp. -<3> Adjust your query here. You can also measure cluster usage with the `cluster-memory-usage` query. -<4> Configures the report to run immediately after saving it and applying it. diff --git a/modules/metering-cluster-utilization-examples.adoc b/modules/metering-cluster-utilization-examples.adoc deleted file mode 100644 index 4c1856b5217f..000000000000 --- a/modules/metering-cluster-utilization-examples.adoc +++ /dev/null @@ -1,26 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-usage-examples.adoc - -[id="metering-cluster-utilization-examples_{context}"] -= Measure cluster utilization using cron expressions - -You can also use cron expressions when configuring the period of your reports. The following report measures cluster utilization by looking at CPU utilization from 9am-5pm every weekday. - -.Weekday CPU utilization by cluster example - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: cluster-cpu-utilization-weekdays <1> -spec: - query: "cluster-cpu-utilization" <2> - schedule: - period: "cron" - expression: 0 0 * * 1-5 <3> ----- -<1> To stay organized, remember to change the `name` of your report if you change any of the other values. -<2> Adjust your query here. You can also measure cluster utilization with the `cluster-memory-utilization` query. -<3> For cron periods, normal cron expressions are valid. diff --git a/modules/metering-configure-persistentvolumes.adoc b/modules/metering-configure-persistentvolumes.adoc deleted file mode 100644 index 418782ec8b2b..000000000000 --- a/modules/metering-configure-persistentvolumes.adoc +++ /dev/null @@ -1,57 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-hive-metastore.adoc - -[id="metering-configure-persistentvolumes_{context}"] -= Configuring persistent volumes - -By default, Hive requires one persistent volume to operate. - -`hive-metastore-db-data` is the main persistent volume claim (PVC) required by default. This PVC is used by the Hive metastore to store metadata about tables, such as table name, columns, and location. Hive metastore is used by Presto and the Hive server to look up table metadata when processing queries. You can remove this requirement by using MySQL or PostgreSQL for the Hive metastore database. 
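Before you adjust the storage class or volume size, it can help to confirm that this claim exists and is bound. A minimal sketch, assuming metering is installed in the default `openshift-metering` namespace:

[source,terminal]
----
$ oc -n openshift-metering get pvc hive-metastore-db-data
----

A `Bound` status indicates that a persistent volume has been provisioned for the Hive metastore.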
- -To install, Hive metastore requires that dynamic volume provisioning is enabled in a storage class, a persistent volume of the correct size must be manually pre-created, or you use a pre-existing MySQL or PostgreSQL database. - -[id="metering-configure-persistentvolumes-storage-class-hive_{context}"] -== Configuring the storage class for the Hive metastore -To configure and specify a storage class for the `hive-metastore-db-data` persistent volume claim, specify the storage class in your `MeteringConfig` custom resource. An example `storage` section with the `class` field is included in the `metastore-storage.yaml` file below. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - hive: - spec: - metastore: - storage: - # Default is null, which means using the default storage class if it exists. - # If you wish to use a different storage class, specify it here - # class: "null" <1> - size: "5Gi" ----- -<1> Uncomment this line and replace `null` with the name of the storage class to use. Leaving the value `null` will cause metering to use the default storage class for the cluster. - -[id="metering-configure-persistentvolumes-volume-size-hive_{context}"] -== Configuring the volume size for the Hive metastore - -Use the `metastore-storage.yaml` file below as a template to configure the volume size for the Hive metastore. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - hive: - spec: - metastore: - storage: - # Default is null, which means using the default storage class if it exists. - # If you wish to use a different storage class, specify it here - # class: "null" - size: "5Gi" <1> ----- -<1> Replace the value for `size` with your desired capacity. The example file shows "5Gi". diff --git a/modules/metering-debugging.adoc b/modules/metering-debugging.adoc deleted file mode 100644 index dab3a52a1eb4..000000000000 --- a/modules/metering-debugging.adoc +++ /dev/null @@ -1,228 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-troubleshooting-debugging.adoc - -[id="metering-debugging_{context}"] -= Debugging metering - -Debugging metering is much easier when you interact directly with the various components. The sections below detail how you can connect and query Presto and Hive as well as view the dashboards of the Presto and HDFS components. - -[NOTE] -==== -All of the commands in this section assume you have installed metering through OperatorHub in the `openshift-metering` namespace. -==== - -[id="metering-get-reporting-operator-logs_{context}"] -== Get reporting operator logs -Use the command below to follow the logs of the `reporting-operator`: - -[source,terminal] ----- -$ oc -n openshift-metering logs -f "$(oc -n openshift-metering get pods -l app=reporting-operator -o name | cut -c 5-)" -c reporting-operator ----- - -[id="metering-query-presto-using-presto-cli_{context}"] -== Query Presto using presto-cli -The following command opens an interactive presto-cli session where you can query Presto. This session runs in the same container as Presto and launches an additional Java instance, which can create memory limits for the pod. If this occurs, you should increase the memory request and limits of the Presto pod. - -By default, Presto is configured to communicate using TLS. 
You must use the following command to run Presto queries: - -[source,terminal] ----- -$ oc -n openshift-metering exec -it "$(oc -n openshift-metering get pods -l app=presto,presto=coordinator -o name | cut -d/ -f2)" \ - -- /usr/local/bin/presto-cli --server https://presto:8080 --catalog hive --schema default --user root --keystore-path /opt/presto/tls/keystore.pem ----- - -Once you run this command, a prompt appears where you can run queries. Use the `show tables from metering;` query to view the list of tables: - -[source,terminal] ----- -$ presto:default> show tables from metering; ----- - -.Example output -[source,terminal] ----- - Table - - datasource_your_namespace_cluster_cpu_capacity_raw - datasource_your_namespace_cluster_cpu_usage_raw - datasource_your_namespace_cluster_memory_capacity_raw - datasource_your_namespace_cluster_memory_usage_raw - datasource_your_namespace_node_allocatable_cpu_cores - datasource_your_namespace_node_allocatable_memory_bytes - datasource_your_namespace_node_capacity_cpu_cores - datasource_your_namespace_node_capacity_memory_bytes - datasource_your_namespace_node_cpu_allocatable_raw - datasource_your_namespace_node_cpu_capacity_raw - datasource_your_namespace_node_memory_allocatable_raw - datasource_your_namespace_node_memory_capacity_raw - datasource_your_namespace_persistentvolumeclaim_capacity_bytes - datasource_your_namespace_persistentvolumeclaim_capacity_raw - datasource_your_namespace_persistentvolumeclaim_phase - datasource_your_namespace_persistentvolumeclaim_phase_raw - datasource_your_namespace_persistentvolumeclaim_request_bytes - datasource_your_namespace_persistentvolumeclaim_request_raw - datasource_your_namespace_persistentvolumeclaim_usage_bytes - datasource_your_namespace_persistentvolumeclaim_usage_raw - datasource_your_namespace_persistentvolumeclaim_usage_with_phase_raw - datasource_your_namespace_pod_cpu_request_raw - datasource_your_namespace_pod_cpu_usage_raw - datasource_your_namespace_pod_limit_cpu_cores - datasource_your_namespace_pod_limit_memory_bytes - datasource_your_namespace_pod_memory_request_raw - datasource_your_namespace_pod_memory_usage_raw - datasource_your_namespace_pod_persistentvolumeclaim_request_info - datasource_your_namespace_pod_request_cpu_cores - datasource_your_namespace_pod_request_memory_bytes - datasource_your_namespace_pod_usage_cpu_cores - datasource_your_namespace_pod_usage_memory_bytes -(32 rows) - -Query 20210503_175727_00107_3venm, FINISHED, 1 node -Splits: 19 total, 19 done (100.00%) -0:02 [32 rows, 2.23KB] [19 rows/s, 1.37KB/s] - -presto:default> ----- - -[id="metering-query-hive-using-beeline_{context}"] -== Query Hive using beeline -The following opens an interactive beeline session where you can query Hive. This session runs in the same container as Hive and launches an additional Java instance, which can create memory limits for the pod. If this occurs, you should increase the memory request and limits of the Hive pod. - -[source,terminal] ----- -$ oc -n openshift-metering exec -it $(oc -n openshift-metering get pods -l app=hive,hive=server -o name | cut -d/ -f2) \ - -c hiveserver2 -- beeline -u 'jdbc:hive2://127.0.0.1:10000/default;auth=noSasl' ----- - -Once you run this command, a prompt appears where you can run queries. 
Use the `show tables;` query to view the list of tables: - -[source,terminal] ----- -$ 0: jdbc:hive2://127.0.0.1:10000/default> show tables from metering; ----- - -.Example output -[source,terminal] ----- -+----------------------------------------------------+ -| tab_name | -+----------------------------------------------------+ -| datasource_your_namespace_cluster_cpu_capacity_raw | -| datasource_your_namespace_cluster_cpu_usage_raw | -| datasource_your_namespace_cluster_memory_capacity_raw | -| datasource_your_namespace_cluster_memory_usage_raw | -| datasource_your_namespace_node_allocatable_cpu_cores | -| datasource_your_namespace_node_allocatable_memory_bytes | -| datasource_your_namespace_node_capacity_cpu_cores | -| datasource_your_namespace_node_capacity_memory_bytes | -| datasource_your_namespace_node_cpu_allocatable_raw | -| datasource_your_namespace_node_cpu_capacity_raw | -| datasource_your_namespace_node_memory_allocatable_raw | -| datasource_your_namespace_node_memory_capacity_raw | -| datasource_your_namespace_persistentvolumeclaim_capacity_bytes | -| datasource_your_namespace_persistentvolumeclaim_capacity_raw | -| datasource_your_namespace_persistentvolumeclaim_phase | -| datasource_your_namespace_persistentvolumeclaim_phase_raw | -| datasource_your_namespace_persistentvolumeclaim_request_bytes | -| datasource_your_namespace_persistentvolumeclaim_request_raw | -| datasource_your_namespace_persistentvolumeclaim_usage_bytes | -| datasource_your_namespace_persistentvolumeclaim_usage_raw | -| datasource_your_namespace_persistentvolumeclaim_usage_with_phase_raw | -| datasource_your_namespace_pod_cpu_request_raw | -| datasource_your_namespace_pod_cpu_usage_raw | -| datasource_your_namespace_pod_limit_cpu_cores | -| datasource_your_namespace_pod_limit_memory_bytes | -| datasource_your_namespace_pod_memory_request_raw | -| datasource_your_namespace_pod_memory_usage_raw | -| datasource_your_namespace_pod_persistentvolumeclaim_request_info | -| datasource_your_namespace_pod_request_cpu_cores | -| datasource_your_namespace_pod_request_memory_bytes | -| datasource_your_namespace_pod_usage_cpu_cores | -| datasource_your_namespace_pod_usage_memory_bytes | -+----------------------------------------------------+ -32 rows selected (13.101 seconds) -0: jdbc:hive2://127.0.0.1:10000/default> ----- - -[id="metering-port-forward-hive-web-ui_{context}"] -== Port-forward to the Hive web UI -Run the following command to port-forward to the Hive web UI: - -[source,terminal] ----- -$ oc -n openshift-metering port-forward hive-server-0 10002 ----- - -You can now open http://127.0.0.1:10002 in your browser window to view the Hive web interface. - -[id="metering-port-forward-hdfs_{context}"] -== Port-forward to HDFS -Run the following command to port-forward to the HDFS namenode: - -[source,terminal] ----- -$ oc -n openshift-metering port-forward hdfs-namenode-0 9870 ----- - -You can now open http://127.0.0.1:9870 in your browser window to view the HDFS web interface. - -Run the following command to port-forward to the first HDFS datanode: - -[source,terminal] ----- -$ oc -n openshift-metering port-forward hdfs-datanode-0 9864 <1> ----- -<1> To check other datanodes, replace `hdfs-datanode-0` with the pod you want to view information on. - -[id="metering-ansible-operator_{context}"] -== Metering Ansible Operator -Metering uses the Ansible Operator to watch and reconcile resources in a cluster environment. 
When debugging a failed metering installation, it can be helpful to view the Ansible logs or status of your `MeteringConfig` custom resource. - -[id="metering-accessing-ansible-logs_{context}"] -=== Accessing Ansible logs -In the default installation, the Metering Operator is deployed as a pod. In this case, you can check the logs of the Ansible container within this pod: - -[source,terminal] ----- -$ oc -n openshift-metering logs $(oc -n openshift-metering get pods -l app=metering-operator -o name | cut -d/ -f2) -c ansible ----- - -Alternatively, you can view the logs of the Operator container (replace `-c ansible` with `-c operator`) for condensed output. - -[id="metering-checking-meteringconfig-status_{context}"] -=== Checking the MeteringConfig Status -It can be helpful to view the `.status` field of your `MeteringConfig` custom resource to debug any recent failures. The following command shows status messages with type `Invalid`: - -[source,terminal] ----- -$ oc -n openshift-metering get meteringconfig operator-metering -o=jsonpath='{.status.conditions[?(@.type=="Invalid")].message}' ----- -// $ oc -n openshift-metering get meteringconfig operator-metering -o json | jq '.status' - -[id="metering-checking-meteringconfig-events_{context}"] -=== Checking MeteringConfig Events -Check events that the Metering Operator is generating. This can be helpful during installation or upgrade to debug any resource failures. Sort events by the last timestamp: - -[source,terminal] ----- -$ oc -n openshift-metering get events --field-selector involvedObject.kind=MeteringConfig --sort-by='.lastTimestamp' ----- - -.Example output with latest changes in the MeteringConfig resources -[source,terminal] ----- -LAST SEEN TYPE REASON OBJECT MESSAGE -4m40s Normal Validating meteringconfig/operator-metering Validating the user-provided configuration -4m30s Normal Started meteringconfig/operator-metering Configuring storage for the metering-ansible-operator -4m26s Normal Started meteringconfig/operator-metering Configuring TLS for the metering-ansible-operator -3m58s Normal Started meteringconfig/operator-metering Configuring reporting for the metering-ansible-operator -3m53s Normal Reconciling meteringconfig/operator-metering Reconciling metering resources -3m47s Normal Reconciling meteringconfig/operator-metering Reconciling monitoring resources -3m41s Normal Reconciling meteringconfig/operator-metering Reconciling HDFS resources -3m23s Normal Reconciling meteringconfig/operator-metering Reconciling Hive resources -2m59s Normal Reconciling meteringconfig/operator-metering Reconciling Presto resources -2m35s Normal Reconciling meteringconfig/operator-metering Reconciling reporting-operator resources -2m14s Normal Reconciling meteringconfig/operator-metering Reconciling reporting resources ----- diff --git a/modules/metering-exposing-the-reporting-api.adoc b/modules/metering-exposing-the-reporting-api.adoc deleted file mode 100644 index 4ee62a5184af..000000000000 --- a/modules/metering-exposing-the-reporting-api.adoc +++ /dev/null @@ -1,159 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-reporting-operator.adoc - -[id="metering-exposing-the-reporting-api_{context}"] -= Exposing the reporting API - -On {product-title} the default metering installation automatically exposes a route, making the reporting API available. 
This provides the following features: - -* Automatic DNS -* Automatic TLS based on the cluster CA - -Also, the default installation makes it possible to use the {product-title} service for serving certificates to protect the reporting API with TLS. The {product-title} OAuth proxy is deployed as a sidecar container for the Reporting Operator, which protects the reporting API with authentication. - -[id="metering-openshift-authentication_{context}"] -== Using {product-title} Authentication - -By default, the reporting API is secured with TLS and authentication. This is done by configuring the Reporting Operator to deploy a pod containing both the Reporting Operator's container, and a sidecar container running {product-title} auth-proxy. - -To access the reporting API, the Metering Operator exposes a route. After that route has been installed, you can run the following command to get the route's hostname. - -[source,terminal] ----- -$ METERING_ROUTE_HOSTNAME=$(oc -n openshift-metering get routes metering -o json | jq -r '.status.ingress[].host') ----- - -Next, set up authentication using either a service account token or basic authentication with a username and password. - -[id="metering-authenticate-using-service-account_{context}"] -=== Authenticate using a service account token -With this method, you use the token in the Reporting Operator's service account, and pass that bearer token to the Authorization header in the following command: - -[source,terminal] ----- -$ TOKEN=$(oc -n openshift-metering serviceaccounts get-token reporting-operator) -curl -H "Authorization: Bearer $TOKEN" -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]" ----- - -Be sure to replace the `name=[Report Name]` and `format=[Format]` parameters in the URL above. The `format` parameter can be json, csv, or tabular. - -[id="metering-authenticate-using-username-password_{context}"] -=== Authenticate using a username and password - -Metering supports configuring basic authentication using a username and password combination, which is specified in the contents of an htpasswd file. By default, a secret containing empty htpasswd data is created. You can, however, configure the `reporting-operator.spec.authProxy.htpasswd.data` and `reporting-operator.spec.authProxy.htpasswd.createSecret` keys to use this method. - -Once you have specified the above in your `MeteringConfig` resource, you can run the following command: - -[source,terminal] ----- -$ curl -u testuser:password123 -k "https://$METERING_ROUTE_HOSTNAME/api/v1/reports/get?name=[Report Name]&namespace=openshift-metering&format=[Format]" ----- - -Be sure to replace `testuser:password123` with a valid username and password combination. - -[id="metering-manually-configure-authentication_{context}"] -== Manually Configuring Authentication -To manually configure, or disable OAuth in the Reporting Operator, you must set `spec.tls.enabled: false` in your `MeteringConfig` resource. - -[WARNING] -==== -This also disables all TLS and authentication between the Reporting Operator, Presto, and Hive. You would need to manually configure these resources yourself. -==== - -Authentication can be enabled by configuring the following options. Enabling authentication configures the Reporting Operator pod to run the {product-title} auth-proxy as a sidecar container in the pod. This adjusts the ports so that the reporting API isn't exposed directly, but instead is proxied to via the auth-proxy sidecar container. 
- -* `reporting-operator.spec.authProxy.enabled` -* `reporting-operator.spec.authProxy.cookie.createSecret` -* `reporting-operator.spec.authProxy.cookie.seed` - -You need to set `reporting-operator.spec.authProxy.enabled` and `reporting-operator.spec.authProxy.cookie.createSecret` to `true` and `reporting-operator.spec.authProxy.cookie.seed` to a 32-character random string. - -You can generate a 32-character random string using the following command. - -[source,terminal] ----- -$ openssl rand -base64 32 | head -c32; echo. ----- - -[id="metering-token-authentication_{context}"] -=== Token authentication - -When the following options are set to `true`, authentication using a bearer token is enabled for the reporting REST API. Bearer tokens can come from service accounts or users. - -* `reporting-operator.spec.authProxy.subjectAccessReview.enabled` -* `reporting-operator.spec.authProxy.delegateURLs.enabled` - -When authentication is enabled, the Bearer token used to query the reporting API of the user or service account must be granted access using one of the following roles: - -* report-exporter -* reporting-admin -* reporting-viewer -* metering-admin -* metering-viewer - -The Metering Operator is capable of creating role bindings for you, granting these permissions by specifying a list of subjects in the `spec.permissions` section. For an example, see the following `advanced-auth.yaml` example configuration. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - permissions: - # anyone in the "metering-admins" group can create, update, delete, etc any - # metering.openshift.io resources in the namespace. - # This also grants permissions to get query report results from the reporting REST API. - meteringAdmins: - - kind: Group - name: metering-admins - # Same as above except read only access and for the metering-viewers group. - meteringViewers: - - kind: Group - name: metering-viewers - # the default serviceaccount in the namespace "my-custom-ns" can: - # create, update, delete, etc reports. - # This also gives permissions query the results from the reporting REST API. - reportingAdmins: - - kind: ServiceAccount - name: default - namespace: my-custom-ns - # anyone in the group reporting-readers can get, list, watch reports, and - # query report results from the reporting REST API. - reportingViewers: - - kind: Group - name: reporting-readers - # anyone in the group cluster-admins can query report results - # from the reporting REST API. So can the user bob-from-accounting. - reportExporters: - - kind: Group - name: cluster-admins - - kind: User - name: bob-from-accounting - - reporting-operator: - spec: - authProxy: - # htpasswd.data can contain htpasswd file contents for allowing auth - # using a static list of usernames and their password hashes. - # - # username is 'testuser' password is 'password123' - # generated htpasswdData using: `htpasswd -nb -s testuser password123` - # htpasswd: - # data: | - # testuser:{SHA}y/2sYAj5yrQIN4TL0YdPdmGNKpc= - # - # change REPLACEME to the output of your htpasswd command - htpasswd: - data: | - REPLACEME ----- - -Alternatively, you can use any role which has rules granting `get` permissions to `reports/export`. This means `get` access to the `export` sub-resource of the `Report` resources in the namespace of the Reporting Operator. For example: `admin` and `cluster-admin`. 
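-
-As a minimal sketch of granting access from the CLI instead of through the `spec.permissions` block, you can bind one of the roles listed above to a user. This assumes the role exists as a namespaced role in the `openshift-metering` namespace, and `testuser` is only a placeholder for a real user in your cluster:
-
-[source,terminal]
-----
-$ oc -n openshift-metering adm policy add-role-to-user report-exporter testuser --role-namespace=openshift-metering
-----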
- -By default, the Reporting Operator and Metering Operator service accounts both have these permissions, and their tokens can be used for authentication. - -[id="metering-basic-authentication_{context}"] -=== Basic authentication with a username and password -For basic authentication you can supply a username and password in the `reporting-operator.spec.authProxy.htpasswd.data` field. The username and password must be the same format as those found in an htpasswd file. When set, you can use HTTP basic authentication to provide your username and password that has a corresponding entry in the `htpasswdData` contents. diff --git a/modules/metering-install-operator.adoc b/modules/metering-install-operator.adoc deleted file mode 100644 index daeee992cc76..000000000000 --- a/modules/metering-install-operator.adoc +++ /dev/null @@ -1,133 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-install-operator_{context}"] -= Installing the Metering Operator - -You can install metering by deploying the Metering Operator. The Metering Operator creates and manages the components of the metering stack. - -[NOTE] -==== -You cannot create a project starting with `openshift-` using the web console or by using the `oc new-project` command in the CLI. -==== - -[NOTE] -==== -If the Metering Operator is installed using a namespace other than `openshift-metering`, the metering reports are only viewable using the CLI. It is strongly suggested throughout the installation steps to use the `openshift-metering` namespace. -==== - -[id="metering-install-web-console_{context}"] -== Installing metering using the web console -You can use the {product-title} web console to install the Metering Operator. - -.Procedure - -. Create a namespace object YAML file for the Metering Operator with the `oc create -f .yaml` command. You must use the CLI to create the namespace. For example, `metering-namespace.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-metering <1> - annotations: - openshift.io/node-selector: "" <2> - labels: - openshift.io/cluster-monitoring: "true" ----- -<1> It is strongly recommended to deploy metering in the `openshift-metering` namespace. -<2> Include this annotation before configuring specific node selectors for the operand pods. - -. In the {product-title} web console, click *Operators* -> *OperatorHub*. Filter for `metering` to find the Metering Operator. - -. Click the *Metering* card, review the package description, and then click *Install*. -. Select an *Update Channel*, *Installation Mode*, and *Approval Strategy*. -. Click *Install*. - -. Verify that the Metering Operator is installed by switching to the *Operators* -> *Installed Operators* page. The Metering Operator has a *Status* of *Succeeded* when the installation is complete. -+ -[NOTE] -==== -It might take several minutes for the Metering Operator to appear. -==== - -. Click *Metering* on the *Installed Operators* page for Operator *Details*. From the *Details* page you can create different resources related to metering. - -To complete the metering installation, create a `MeteringConfig` resource to configure metering and install the components of the metering stack. - -[id="metering-install-cli_{context}"] -== Installing metering using the CLI - -You can use the {product-title} CLI to install the Metering Operator. - -.Procedure - -. 
Create a `Namespace` object YAML file for the Metering Operator. You must use the CLI to create the namespace. For example, `metering-namespace.yaml`: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-metering <1> - annotations: - openshift.io/node-selector: "" <2> - labels: - openshift.io/cluster-monitoring: "true" ----- -<1> It is strongly recommended to deploy metering in the `openshift-metering` namespace. -<2> Include this annotation before configuring specific node selectors for the operand pods. - -. Create the `Namespace` object: -+ -[source,terminal] ----- -$ oc create -f .yaml ----- -+ -For example: -+ -[source,terminal] ----- -$ oc create -f openshift-metering.yaml ----- - -. Create the `OperatorGroup` object YAML file. For example, `metering-og`: -+ -[source,yaml] ----- -apiVersion: operators.coreos.com/v1 -kind: OperatorGroup -metadata: - name: openshift-metering <1> - namespace: openshift-metering <2> -spec: - targetNamespaces: - - openshift-metering ----- -<1> The name is arbitrary. -<2> Specify the `openshift-metering` namespace. - -. Create a `Subscription` object YAML file to subscribe a namespace to the Metering Operator. This object targets the most recently released version in the `redhat-operators` catalog source. For example, `metering-sub.yaml`: -+ -[source,yaml, subs="attributes+"] ----- -apiVersion: operators.coreos.com/v1alpha1 -kind: Subscription -metadata: - name: metering-ocp <1> - namespace: openshift-metering <2> -spec: - channel: "{product-version}" <3> - source: "redhat-operators" <4> - sourceNamespace: "openshift-marketplace" - name: "metering-ocp" - installPlanApproval: "Automatic" <5> ----- -<1> The name is arbitrary. -<2> You must specify the `openshift-metering` namespace. -<3> Specify {product-version} as the channel. -<4> Specify the `redhat-operators` catalog source, which contains the `metering-ocp` package manifests. If your {product-title} is installed on a restricted network, also known as a disconnected cluster, specify the name of the `CatalogSource` object you created when you configured the Operator LifeCycle Manager (OLM). -<5> Specify "Automatic" install plan approval. diff --git a/modules/metering-install-prerequisites.adoc b/modules/metering-install-prerequisites.adoc deleted file mode 100644 index 293f9f55b897..000000000000 --- a/modules/metering-install-prerequisites.adoc +++ /dev/null @@ -1,13 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adoc - -[id="metering-install-prerequisites_{context}"] -= Prerequisites - -Metering requires the following components: - -* A `StorageClass` resource for dynamic volume provisioning. Metering supports a number of different storage solutions. -* 4GB memory and 4 CPU cores available cluster capacity and at least one node with 2 CPU cores and 2GB memory capacity available. -* The minimum resources needed for the largest single pod installed by metering are 2GB of memory and 2 CPU cores. -** Memory and CPU consumption may often be lower, but will spike when running reports, or collecting data for larger clusters. 
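-
-As a quick sketch of checking the storage prerequisite, list the storage classes in the cluster and confirm that at least one supports dynamic volume provisioning:
-
-[source,terminal]
-----
-$ oc get storageclass
-----
-
-If a default storage class is configured, it is marked `(default)` next to its name in the output.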
diff --git a/modules/metering-install-verify.adoc b/modules/metering-install-verify.adoc deleted file mode 100644 index 9b575dfa7962..000000000000 --- a/modules/metering-install-verify.adoc +++ /dev/null @@ -1,95 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adc - -[id="metering-install-verify_{context}"] -= Verifying the metering installation - -You can verify the metering installation by performing any of the following checks: - -* Check the Metering Operator `ClusterServiceVersion` (CSV) resource for the metering version. This can be done through either the web console or CLI. -+ --- -.Procedure (UI) - . Navigate to *Operators* -> *Installed Operators* in the `openshift-metering` namespace. - . Click *Metering Operator*. - . Click *Subscription* for *Subscription Details*. - . Check the *Installed Version*. - -.Procedure (CLI) -* Check the Metering Operator CSV in the `openshift-metering` namespace: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering get csv ----- -+ -.Example output -[source,terminal,subs="attributes+"] ----- -NAME DISPLAY VERSION REPLACES PHASE -elasticsearch-operator.{product-version}.0-202006231303.p0 OpenShift Elasticsearch Operator {product-version}.0-202006231303.p0 Succeeded -metering-operator.v{product-version}.0 Metering {product-version}.0 Succeeded ----- --- - -* Check that all required pods in the `openshift-metering` namespace are created. This can be done through either the web console or CLI. -+ --- -[NOTE] -==== -Many pods rely on other components to function before they themselves can be considered ready. Some pods may restart if other pods take too long to start. This is to be expected during the Metering Operator installation. -==== - -.Procedure (UI) -* Navigate to *Workloads* -> *Pods* in the metering namespace and verify that pods are being created. This can take several minutes after installing the metering stack. - -.Procedure (CLI) -* Check that all required pods in the `openshift-metering` namespace are created: -+ -[source,terminal] ----- -$ oc -n openshift-metering get pods ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -hive-metastore-0 2/2 Running 0 3m28s -hive-server-0 3/3 Running 0 3m28s -metering-operator-68dd64cfb6-2k7d9 2/2 Running 0 5m17s -presto-coordinator-0 2/2 Running 0 3m9s -reporting-operator-5588964bf8-x2tkn 2/2 Running 0 2m40s ----- --- - -* Verify that the `ReportDataSource` resources are beginning to import data, indicated by a valid timestamp in the `EARLIEST METRIC` column. This might take several minutes. 
Filter out the "-raw" `ReportDataSource` resources, which do not import data: -+ -[source,terminal] ----- -$ oc get reportdatasources -n openshift-metering | grep -v raw ----- -+ -.Example output -[source,terminal] ----- -NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE -node-allocatable-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T16:52:00Z 2019-08-05T18:52:00Z 2019-08-05T18:54:45Z 9m50s -node-allocatable-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T16:51:00Z 2019-08-05T18:51:00Z 2019-08-05T18:54:45Z 9m50s -node-capacity-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:39Z 9m50s -node-capacity-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T16:52:00Z 2019-08-05T18:41:00Z 2019-08-05T18:54:44Z 9m50s -persistentvolumeclaim-capacity-bytes 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:43Z 9m50s -persistentvolumeclaim-phase 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T16:51:00Z 2019-08-05T18:29:00Z 2019-08-05T18:54:28Z 9m50s -persistentvolumeclaim-request-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:34Z 9m50s -persistentvolumeclaim-usage-bytes 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:36Z 9m49s -pod-limit-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T16:52:00Z 2019-08-05T18:30:00Z 2019-08-05T18:54:26Z 9m49s -pod-limit-memory-bytes 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:30Z 9m49s -pod-persistentvolumeclaim-request-info 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T16:51:00Z 2019-08-05T18:40:00Z 2019-08-05T18:54:37Z 9m49s -pod-request-cpu-cores 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T16:51:00Z 2019-08-05T18:18:00Z 2019-08-05T18:54:24Z 9m49s -pod-request-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:32Z 9m49s -pod-usage-cpu-cores 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T16:52:00Z 2019-08-05T17:57:00Z 2019-08-05T18:54:10Z 9m49s -pod-usage-memory-bytes 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T16:52:00Z 2019-08-05T18:08:00Z 2019-08-05T18:54:20Z 9m49s ----- - -After all pods are ready and you have verified that data is being imported, you can begin using metering to collect data and report on your cluster. diff --git a/modules/metering-overview.adoc b/modules/metering-overview.adoc deleted file mode 100644 index abb20ed8c452..000000000000 --- a/modules/metering-overview.adoc +++ /dev/null @@ -1,33 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-installing-metering.adoc -// * metering/metering-using-metering.adoc - -[id="metering-overview_{context}"] -= Metering overview - -Metering is a general purpose data analysis tool that enables you to write reports to process data from different data sources. As a cluster administrator, you can use metering to analyze what is happening in your cluster. You can either write your own, or use predefined SQL queries to define how you want to process data from the different data sources you have available. - -Metering focuses primarily on in-cluster metric data using Prometheus as a default data source, enabling users of metering to do reporting on pods, namespaces, and most other Kubernetes resources. 
- -You can install metering on {product-title} 4.x clusters and above. - -[id="metering-resources_{context}"] -== Metering resources - -Metering has many resources which can be used to manage the deployment and installation of metering, as well as the reporting functionality metering provides. - -Metering is managed using the following custom resource definitions (CRDs): - -[cols="1,7"] -|=== - -|*MeteringConfig* |Configures the metering stack for deployment. Contains customizations and configuration options to control each component that makes up the metering stack. - -|*Report* |Controls what query to use, when, and how often the query should be run, and where to store the results. - -|*ReportQuery* |Contains the SQL queries used to perform analysis on the data contained within `ReportDataSource` resources. - -|*ReportDataSource* |Controls the data available to `ReportQuery` and `Report` resources. Allows configuring access to different databases for use within metering. - -|=== diff --git a/modules/metering-prometheus-connection.adoc b/modules/metering-prometheus-connection.adoc deleted file mode 100644 index b713fd4ff17d..000000000000 --- a/modules/metering-prometheus-connection.adoc +++ /dev/null @@ -1,55 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-reporting-operator.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-prometheus-connection_{context}"] -= Securing a Prometheus connection - -When you install metering on {product-title}, Prometheus is available at https://prometheus-k8s.openshift-monitoring.svc:9091/. - -To secure the connection to Prometheus, the default metering installation uses the {product-title} certificate authority (CA). If your Prometheus instance uses a different CA, you can inject the CA through a config map. You can also configure the Reporting Operator to use a specified bearer token to authenticate with Prometheus. - -.Procedure - -* Inject the CA that your Prometheus instance uses through a config map. For example: -+ -[source,yaml] ----- -spec: - reporting-operator: - spec: - config: - prometheus: - certificateAuthority: - useServiceAccountCA: false - configMap: - enabled: true - create: true - name: reporting-operator-certificate-authority-config - filename: "internal-ca.crt" - value: | - -----BEGIN CERTIFICATE----- - (snip) - -----END CERTIFICATE----- ----- -+ -Alternatively, to use the system certificate authorities for publicly valid certificates, set both `useServiceAccountCA` and `configMap.enabled` to `false`. - -* Specify a bearer token to authenticate with Prometheus. For example: - -[source,yaml] ----- -spec: - reporting-operator: - spec: - config: - prometheus: - metricsImporter: - auth: - useServiceAccountToken: false - tokenSecret: - enabled: true - create: true - value: "abc-123" ----- diff --git a/modules/metering-reports.adoc b/modules/metering-reports.adoc deleted file mode 100644 index e9cb4025d9e7..000000000000 --- a/modules/metering-reports.adoc +++ /dev/null @@ -1,381 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-about-reports.adoc -[id="metering-reports_{context}"] -= Reports - -The `Report` custom resource is used to manage the execution and status of reports. Metering produces reports derived from usage data sources, which can be used in further analysis and filtering. A single `Report` resource represents a job that manages a database table and updates it with new information according to a schedule. 
The report exposes the data in that table via the Reporting Operator HTTP API. - -Reports with a `spec.schedule` field set are always running and track what time periods it has collected data for. This ensures that if metering is shutdown or unavailable for an extended period of time, it backfills the data starting where it left off. If the schedule is unset, then the report runs once for the time specified by the `reportingStart` and `reportingEnd`. By default, reports wait for `ReportDataSource` resources to have fully imported any data covered in the reporting period. If the report has a schedule, it waits to run until the data in the period currently being processed has finished importing. - -[id="metering-example-report-with-schedule_{context}"] -== Example report with a schedule - -The following example `Report` object contains information on every pod's CPU requests, and runs every hour, adding the last hours worth of data each time it runs. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - reportingStart: "2021-07-01T00:00:00Z" - schedule: - period: "hourly" - hourly: - minute: 0 - second: 0 ----- - -[id="metering-example-report-without-schedule_{context}"] -== Example report without a schedule (run-once) - -The following example `Report` object contains information on every pod's CPU requests for all of July. After completion, it does not run again. - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - reportingStart: "2021-07-01T00:00:00Z" - reportingEnd: "2021-07-31T00:00:00Z" ----- - -[id="metering-query_{context}"] -== query - -The `query` field names the `ReportQuery` resource used to generate the report. The report query controls the schema of the report as well as how the results are processed. 
- -*`query` is a required field.* - -Use the following command to list available `ReportQuery` resources: - -[source,terminal] ----- -$ oc -n openshift-metering get reportqueries ----- - -.Example output -[source,terminal] ----- -NAME AGE -cluster-cpu-capacity 23m -cluster-cpu-capacity-raw 23m -cluster-cpu-usage 23m -cluster-cpu-usage-raw 23m -cluster-cpu-utilization 23m -cluster-memory-capacity 23m -cluster-memory-capacity-raw 23m -cluster-memory-usage 23m -cluster-memory-usage-raw 23m -cluster-memory-utilization 23m -cluster-persistentvolumeclaim-request 23m -namespace-cpu-request 23m -namespace-cpu-usage 23m -namespace-cpu-utilization 23m -namespace-memory-request 23m -namespace-memory-usage 23m -namespace-memory-utilization 23m -namespace-persistentvolumeclaim-request 23m -namespace-persistentvolumeclaim-usage 23m -node-cpu-allocatable 23m -node-cpu-allocatable-raw 23m -node-cpu-capacity 23m -node-cpu-capacity-raw 23m -node-cpu-utilization 23m -node-memory-allocatable 23m -node-memory-allocatable-raw 23m -node-memory-capacity 23m -node-memory-capacity-raw 23m -node-memory-utilization 23m -persistentvolumeclaim-capacity 23m -persistentvolumeclaim-capacity-raw 23m -persistentvolumeclaim-phase-raw 23m -persistentvolumeclaim-request 23m -persistentvolumeclaim-request-raw 23m -persistentvolumeclaim-usage 23m -persistentvolumeclaim-usage-raw 23m -persistentvolumeclaim-usage-with-phase-raw 23m -pod-cpu-request 23m -pod-cpu-request-raw 23m -pod-cpu-usage 23m -pod-cpu-usage-raw 23m -pod-memory-request 23m -pod-memory-request-raw 23m -pod-memory-usage 23m -pod-memory-usage-raw 23m ----- - -Report queries with the `-raw` suffix are used by other `ReportQuery` resources to build more complex queries, and should not be used directly for reports. - -`namespace-` prefixed queries aggregate pod CPU and memory requests by namespace, providing a list of namespaces and their overall usage based on resource requests. - -`pod-` prefixed queries are similar to `namespace-` prefixed queries but aggregate information by pod rather than namespace. These queries include the pod's namespace and node. - -`node-` prefixed queries return information about each node's total available resources. - -`aws-` prefixed queries are specific to AWS. Queries suffixed with `-aws` return the same data as queries of the same name without the suffix, and correlate usage with the EC2 billing data. - -The `aws-ec2-billing-data` report is used by other queries, and should not be used as a standalone report. The `aws-ec2-cluster-cost` report provides a total cost based on the nodes included in the cluster, and the sum of their costs for the time period being reported on. - -Use the following command to get the `ReportQuery` resource as YAML, and check the `spec.columns` field. For example, run: - -[source,terminal] ----- -$ oc -n openshift-metering get reportqueries namespace-memory-request -o yaml ----- - -.Example output -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: ReportQuery -metadata: - name: namespace-memory-request - labels: - operator-metering: "true" -spec: - columns: - - name: period_start - type: timestamp - unit: date - - name: period_end - type: timestamp - unit: date - - name: namespace - type: varchar - unit: kubernetes_namespace - - name: pod_request_memory_byte_seconds - type: double - unit: byte_seconds ----- - -[id="metering-schedule_{context}"] -== schedule - -The `spec.schedule` configuration block defines when the report runs. 
The main fields in the `schedule` section are `period`, and then depending on the value of `period`, the fields `hourly`, `daily`, `weekly`, and `monthly` allow you to fine-tune when the report runs. - -For example, if `period` is set to `weekly`, you can add a `weekly` field to the `spec.schedule` block. The following example will run once a week on Wednesday, at 1 PM (hour 13 in the day). - -[source,yaml] ----- -... - schedule: - period: "weekly" - weekly: - dayOfWeek: "wednesday" - hour: 13 -... ----- - -[id="metering-period_{context}"] -=== period - -Valid values of `schedule.period` are listed below, and the options available to set for a given period are also listed. - -* `hourly` -** `minute` -** `second` -* `daily` -** `hour` -** `minute` -** `second` -* `weekly` -** `dayOfWeek` -** `hour` -** `minute` -** `second` -* `monthly` -** `dayOfMonth` -** `hour` -** `minute` -** `second` -* `cron` -** `expression` - -Generally, the `hour`, `minute`, `second` fields control when in the day the report runs, and `dayOfWeek`/`dayOfMonth` control what day of the week, or day of month the report runs on, if it is a weekly or monthly report period. - -For each of these fields, there is a range of valid values: - -* `hour` is an integer value between 0-23. -* `minute` is an integer value between 0-59. -* `second` is an integer value between 0-59. -* `dayOfWeek` is a string value that expects the day of the week (spelled out). -* `dayOfMonth` is an integer value between 1-31. - -For cron periods, normal cron expressions are valid: - -* `expression: "*/5 * * * *"` - -[id="metering-reportingStart_{context}"] -== reportingStart - -To support running a report against existing data, you can set the `spec.reportingStart` field to a link:https://tools.ietf.org/html/rfc3339#section-5.8[RFC3339 timestamp] to tell the report to run according to its `schedule` starting from `reportingStart` rather than the current time. - -[NOTE] -==== -Setting the `spec.reportingStart` field to a specific time will result in the Reporting Operator running many queries in succession for each interval in the schedule that is between the `reportingStart` time and the current time. This could be thousands of queries if the period is less than daily and the `reportingStart` is more than a few months back. If `reportingStart` is left unset, the report will run at the next full `reportingPeriod` after the time the report is created. -==== - -As an example of how to use this field, if you had data already collected dating back to January 1st, 2019 that you want to include in your `Report` object, you can create a report with the following values: - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - schedule: - period: "hourly" - reportingStart: "2021-01-01T00:00:00Z" ----- - -[id="metering-reportingEnd_{context}"] -== reportingEnd - -To configure a report to only run until a specified time, you can set the `spec.reportingEnd` field to an link:https://tools.ietf.org/html/rfc3339#section-5.8[RFC3339 timestamp]. The value of this field will cause the report to stop running on its schedule after it has finished generating reporting data for the period covered from its start time until `reportingEnd`. - -Because a schedule will most likely not align with the `reportingEnd`, the last period in the schedule will be shortened to end at the specified `reportingEnd` time. 
If left unset, then the report will run forever, or until a `reportingEnd` is set on the report. - -For example, if you want to create a report that runs once a week for the month of July: - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - schedule: - period: "weekly" - reportingStart: "2021-07-01T00:00:00Z" - reportingEnd: "2021-07-31T00:00:00Z" ----- - -[id="metering-expiration_{context}"] -== expiration - -Add the `expiration` field to set a retention period on a scheduled metering report. You can avoid manually removing the report by setting the `expiration` duration value. The retention period is equal to the `Report` object `creationDate` plus the `expiration` duration. The report is removed from the cluster at the end of the retention period if no other reports or report queries depend on the expiring report. Deleting the report from the cluster can take several minutes. - -[NOTE] -==== -Setting the `expiration` field is not recommended for roll-up or aggregated reports. If a report is depended upon by other reports or report queries, then the report is not removed at the end of the retention period. You can view the `report-operator` logs at debug level for the timing output around a report retention decision. -==== - -For example, the following scheduled report is deleted 30 minutes after the `creationDate` of the report: - -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: pod-cpu-request-hourly -spec: - query: "pod-cpu-request" - schedule: - period: "weekly" - reportingStart: "2021-07-01T00:00:00Z" - expiration: "30m" <1> ----- -<1> Valid time units for the `expiration` duration are `ns`, `us` (or `µs`), `ms`, `s`, `m`, and `h`. - -[NOTE] -==== -The `expiration` retention period for a `Report` object is not precise and works on the order of several minutes, not nanoseconds. -==== - -[id="metering-runImmediately_{context}"] -== runImmediately - -When `runImmediately` is set to `true`, the report runs immediately. This behavior ensures that the report is immediately processed and queued without requiring additional scheduling parameters. - -[NOTE] -==== -When `runImmediately` is set to `true`, you must set a `reportingEnd` and `reportingStart` value. -==== - -[id="metering-inputs_{context}"] -== inputs - -The `spec.inputs` field of a `Report` object can be used to override or set values defined in a `ReportQuery` resource's `spec.inputs` field. - -`spec.inputs` is a list of name-value pairs: - -[source,yaml] ----- -spec: - inputs: - - name: "NamespaceCPUUsageReportName" <1> - value: "namespace-cpu-usage-hourly" <2> ----- - -<1> The `name` of an input must exist in the ReportQuery's `inputs` list. -<2> The `value` of the input must be the correct type for the input's `type`. - -// TODO(chance): include modules/metering-reportquery-inputs.adoc module - -[id="metering-roll-up-reports_{context}"] -== Roll-up reports - -Report data is stored in the database much like metrics themselves, and therefore, can be used in aggregated or roll-up reports. A simple use case for a roll-up report is to spread the time required to produce a report over a longer period of time. This is instead of requiring a monthly report to query and add all data over an entire month. For example, the task can be split into daily reports that each run over 1/30 of the data. - -A custom roll-up report requires a custom report query. 
The `ReportQuery` resource template processor provides a `reportTableName` function that can get the necessary table name from a `Report` object's `metadata.name`. - -Below is a snippet taken from a built-in query: - -.pod-cpu.yaml -[source,yaml] ----- -spec: -... - inputs: - - name: ReportingStart - type: time - - name: ReportingEnd - type: time - - name: NamespaceCPUUsageReportName - type: Report - - name: PodCpuUsageRawDataSourceName - type: ReportDataSource - default: pod-cpu-usage-raw -... - - query: | -... - {|- if .Report.Inputs.NamespaceCPUUsageReportName |} - namespace, - sum(pod_usage_cpu_core_seconds) as pod_usage_cpu_core_seconds - FROM {| .Report.Inputs.NamespaceCPUUsageReportName | reportTableName |} -... ----- - -.Example `aggregated-report.yaml` roll-up report -[source,yaml] ----- -spec: - query: "namespace-cpu-usage" - inputs: - - name: "NamespaceCPUUsageReportName" - value: "namespace-cpu-usage-hourly" ----- - -// TODO(chance): replace the comment below with an include on the modules/metering-rollup-report.adoc -// For more information on setting up a roll-up report, see the [roll-up report guide](rollup-reports.md). - -[id="metering-report-status_{context}"] -=== Report status - -The execution of a scheduled report can be tracked using its status field. Any errors occurring during the preparation of a report will be recorded here. - -The `status` field of a `Report` object currently has two fields: - -* `conditions`: Conditions is a list of conditions, each of which have a `type`, `status`, `reason`, and `message` field. Possible values of a condition's `type` field are `Running` and `Failure`, indicating the current state of the scheduled report. The `reason` indicates why its `condition` is in its current state with the `status` being either `true`, `false` or, `unknown`. The `message` provides a human readable indicating why the condition is in the current state. For detailed information on the `reason` values, see link:https://github.com/operator-framework/operator-metering/blob/master/pkg/apis/metering/v1/util/report_util.go#L10[`pkg/apis/metering/v1/util/report_util.go`]. -* `lastReportTime`: Indicates the time metering has collected data up to. diff --git a/modules/metering-store-data-in-azure.adoc b/modules/metering-store-data-in-azure.adoc deleted file mode 100644 index a193836d22ba..000000000000 --- a/modules/metering-store-data-in-azure.adoc +++ /dev/null @@ -1,57 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-azure_{context}"] -= Storing data in Microsoft Azure - -To store data in Azure blob storage, you must use an existing container. - -.Procedure - -. Edit the `spec.storage` section in the `azure-blob-storage.yaml` file: -+ -.Example `azure-blob-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "azure" - azure: - container: "bucket1" <1> - secretName: "my-azure-secret" <2> - rootDirectory: "/testDir" <3> ----- -<1> Specify the container name. -<2> Specify a secret in the metering namespace. See the example `Secret` object below for more details. -<3> Optional: Specify the directory where you would like to store your data. - -. 
Use the following `Secret` object as a template: -+ -.Example Azure `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-azure-secret -data: - azure-storage-account-name: "dGVzdAo=" - azure-secret-access-key: "c2VjcmV0Cg==" ----- - -. Create the secret: -+ -[source,terminal] ----- -$ oc create secret -n openshift-metering generic my-azure-secret \ - --from-literal=azure-storage-account-name=my-storage-account-name \ - --from-literal=azure-secret-access-key=my-secret-key ----- diff --git a/modules/metering-store-data-in-gcp.adoc b/modules/metering-store-data-in-gcp.adoc deleted file mode 100644 index 8a39f891ab18..000000000000 --- a/modules/metering-store-data-in-gcp.adoc +++ /dev/null @@ -1,53 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-gcp_{context}"] -= Storing data in Google Cloud Storage - -To store your data in Google Cloud Storage, you must use an existing bucket. - -.Procedure - -. Edit the `spec.storage` section in the `gcs-storage.yaml` file: -+ -.Example `gcs-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "gcs" - gcs: - bucket: "metering-gcs/test1" <1> - secretName: "my-gcs-secret" <2> ----- -<1> Specify the name of the bucket. You can optionally specify the directory within the bucket where you would like to store your data. -<2> Specify a secret in the metering namespace. See the example `Secret` object below for more details. - -. Use the following `Secret` object as a template: -+ -.Example Google Cloud Storage `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-gcs-secret -data: - gcs-service-account.json: "c2VjcmV0Cg==" ----- - -. Create the secret: -+ -[source,terminal] ----- -$ oc create secret -n openshift-metering generic my-gcs-secret \ - --from-file gcs-service-account.json=/path/to/my/service-account-key.json ----- diff --git a/modules/metering-store-data-in-s3-compatible.adoc b/modules/metering-store-data-in-s3-compatible.adoc deleted file mode 100644 index 1484c0281d36..000000000000 --- a/modules/metering-store-data-in-s3-compatible.adoc +++ /dev/null @@ -1,48 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-s3-compatible_{context}"] -= Storing data in S3-compatible storage - -You can use S3-compatible storage such as Noobaa. - -.Procedure - -. Edit the `spec.storage` section in the `s3-compatible-storage.yaml` file: -+ -.Example `s3-compatible-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "s3Compatible" - s3Compatible: - bucket: "bucketname" <1> - endpoint: "http://example:port-number" <2> - secretName: "my-aws-secret" <3> ----- -<1> Specify the name of your S3-compatible bucket. -<2> Specify the endpoint for your storage. -<3> The name of a secret in the metering namespace containing the AWS credentials in the `data.aws-access-key-id` and `data.aws-secret-access-key` fields. See the example `Secret` object below for more details. - -. 
Use the following `Secret` object as a template: -+ -.Example S3-compatible `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-aws-secret -data: - aws-access-key-id: "dGVzdAo=" - aws-secret-access-key: "c2VjcmV0Cg==" ----- diff --git a/modules/metering-store-data-in-s3.adoc b/modules/metering-store-data-in-s3.adoc deleted file mode 100644 index 41199e170c37..000000000000 --- a/modules/metering-store-data-in-s3.adoc +++ /dev/null @@ -1,136 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-s3_{context}"] -= Storing data in Amazon S3 - -Metering can use an existing Amazon S3 bucket or create a bucket for storage. - -[NOTE] -==== -Metering does not manage or delete any S3 bucket data. You must manually clean up S3 buckets that are used to store metering data. -==== - -.Procedure - -. Edit the `spec.storage` section in the `s3-storage.yaml` file: -+ -.Example `s3-storage.yaml` file -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "s3" - s3: - bucket: "bucketname/path/" <1> - region: "us-west-1" <2> - secretName: "my-aws-secret" <3> - # Set to false if you want to provide an existing bucket, instead of - # having metering create the bucket on your behalf. - createBucket: true <4> ----- -<1> Specify the name of the bucket where you would like to store your data. Optional: Specify the path within the bucket. -<2> Specify the region of your bucket. -<3> The name of a secret in the metering namespace containing the AWS credentials in the `data.aws-access-key-id` and `data.aws-secret-access-key` fields. See the example `Secret` object below for more details. -<4> Set this field to `false` if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have `CreateBucket` permissions. - -. Use the following `Secret` object as a template: -+ -.Example AWS `Secret` object -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: my-aws-secret -data: - aws-access-key-id: "dGVzdAo=" - aws-secret-access-key: "c2VjcmV0Cg==" ----- -+ -[NOTE] -==== -The values of the `aws-access-key-id` and `aws-secret-access-key` must be base64 encoded. -==== - -. Create the secret: -+ -[source,terminal] ----- -$ oc create secret -n openshift-metering generic my-aws-secret \ - --from-literal=aws-access-key-id=my-access-key \ - --from-literal=aws-secret-access-key=my-secret-key ----- -+ -[NOTE] -==== -This command automatically base64 encodes your `aws-access-key-id` and `aws-secret-access-key` values. -==== - -The `aws-access-key-id` and `aws-secret-access-key` credentials must have read and write access to the bucket. 
The following `aws/read-write.json` file shows an IAM policy that grants the required permissions: - -.Example `aws/read-write.json` file -[source,json] ----- -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "1", - "Effect": "Allow", - "Action": [ - "s3:AbortMultipartUpload", - "s3:DeleteObject", - "s3:GetObject", - "s3:HeadBucket", - "s3:ListBucket", - "s3:ListMultipartUploadParts", - "s3:PutObject" - ], - "Resource": [ - "arn:aws:s3:::operator-metering-data/*", - "arn:aws:s3:::operator-metering-data" - ] - } - ] -} ----- - -If `spec.storage.hive.s3.createBucket` is set to `true` or unset in your `s3-storage.yaml` file, then you should use the `aws/read-write-create.json` file that contains permissions for creating and deleting buckets: - -.Example `aws/read-write-create.json` file -[source,json] ----- -{ - "Version": "2012-10-17", - "Statement": [ - { - "Sid": "1", - "Effect": "Allow", - "Action": [ - "s3:AbortMultipartUpload", - "s3:DeleteObject", - "s3:GetObject", - "s3:HeadBucket", - "s3:ListBucket", - "s3:CreateBucket", - "s3:DeleteBucket", - "s3:ListMultipartUploadParts", - "s3:PutObject" - ], - "Resource": [ - "arn:aws:s3:::operator-metering-data/*", - "arn:aws:s3:::operator-metering-data" - ] - } - ] -} ----- diff --git a/modules/metering-store-data-in-shared-volumes.adoc b/modules/metering-store-data-in-shared-volumes.adoc deleted file mode 100644 index a3a73285a5fe..000000000000 --- a/modules/metering-store-data-in-shared-volumes.adoc +++ /dev/null @@ -1,150 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-persistent-storage.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-store-data-in-shared-volumes_{context}"] -= Storing data in shared volumes - -Metering does not configure storage by default. However, you can use any ReadWriteMany persistent volume (PV) or any storage class that provisions a ReadWriteMany PV for metering storage. - -[NOTE] -==== -NFS is not recommended to use in production. Using an NFS server on RHEL as a storage back end can fail to meet metering requirements and to provide the performance that is needed for the Metering Operator to work appropriately. - -Other NFS implementations on the marketplace might not have these issues, such as a Parallel Network File System (pNFS). pNFS is an NFS implementation with distributed and parallel capability. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against {product-title} core components. -==== - -.Procedure - -. Modify the `shared-storage.yaml` file to use a ReadWriteMany persistent volume for storage: -+ -.Example `shared-storage.yaml` file --- -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: MeteringConfig -metadata: - name: "operator-metering" -spec: - storage: - type: "hive" - hive: - type: "sharedPVC" - sharedPVC: - claimName: "metering-nfs" <1> - # Uncomment the lines below to provision a new PVC using the specified storageClass. <2> - # createPVC: true - # storageClass: "my-nfs-storage-class" - # size: 5Gi ----- - -Select one of the configuration options below: - -<1> Set `storage.hive.sharedPVC.claimName` to the name of an existing ReadWriteMany persistent volume claim (PVC). This configuration is necessary if you do not have dynamic volume provisioning or want to have more control over how the persistent volume is created. 
- -<2> Set `storage.hive.sharedPVC.createPVC` to `true` and set the `storage.hive.sharedPVC.storageClass` to the name of a storage class with ReadWriteMany access mode. This configuration uses dynamic volume provisioning to create a volume automatically. --- - -. Create the following resource objects that are required to deploy an NFS server for metering. Use the `oc create -f .yaml` command to create the object YAML files. - -.. Configure a `PersistentVolume` resource object: -+ -.Example `nfs_persistentvolume.yaml` file -[source,yaml] ----- -apiVersion: v1 -kind: PersistentVolume -metadata: - name: nfs - labels: - role: nfs-server -spec: - capacity: - storage: 5Gi - accessModes: - - ReadWriteMany - storageClassName: nfs-server <1> - nfs: - path: "/" - server: REPLACEME - persistentVolumeReclaimPolicy: Delete ----- -<1> Must exactly match the `[kind: StorageClass].metadata.name` field value. - -.. Configure a `Pod` resource object with the `nfs-server` role: -+ -.Example `nfs_server.yaml` file -[source,yaml] ----- -apiVersion: v1 -kind: Pod -metadata: - name: nfs-server - labels: - role: nfs-server -spec: - containers: - - name: nfs-server - image: <1> - imagePullPolicy: IfNotPresent - ports: - - name: nfs - containerPort: 2049 - securityContext: - privileged: true - volumeMounts: - - mountPath: "/mnt/data" - name: local - volumes: - - name: local - emptyDir: {} ----- -<1> Install your NFS server image. - -.. Configure a `Service` resource object with the `nfs-server` role: -+ -.Example `nfs_service.yaml` file -[source,yaml] ----- -apiVersion: v1 -kind: Service -metadata: - name: nfs-service - labels: - role: nfs-server -spec: - ports: - - name: 2049-tcp - port: 2049 - protocol: TCP - targetPort: 2049 - selector: - role: nfs-server - sessionAffinity: None - type: ClusterIP ----- - -.. Configure a `StorageClass` resource object: -+ -.Example `nfs_storageclass.yaml` file -[source,yaml] ----- -apiVersion: storage.k8s.io/v1 -kind: StorageClass -metadata: - name: nfs-server <1> -provisioner: example.com/nfs -parameters: - archiveOnDelete: "false" -reclaimPolicy: Delete -volumeBindingMode: Immediate ----- -<1> Must exactly match the `[kind: PersistentVolume].spec.storageClassName` field value. - - -[WARNING] -==== -Configuration of your NFS storage, and any relevant resource objects, will vary depending on the NFS server image that you use for metering storage. -==== diff --git a/modules/metering-troubleshooting.adoc b/modules/metering-troubleshooting.adoc deleted file mode 100644 index e0a857ced20f..000000000000 --- a/modules/metering-troubleshooting.adoc +++ /dev/null @@ -1,195 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-troubleshooting-debugging.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-troubleshooting_{context}"] -= Troubleshooting metering - -A common issue with metering is pods failing to start. Pods might fail to start due to lack of resources or if they have a dependency on a resource that does not exist, such as a `StorageClass` or `Secret` resource. - -[id="metering-not-enough-compute-resources_{context}"] -== Not enough compute resources - -A common issue when installing or running metering is a lack of compute resources. As the cluster grows and more reports are created, the Reporting Operator pod requires more memory. If memory usage reaches the pod limit, the cluster considers the pod out of memory (OOM) and terminates it with an `OOMKilled` status. 
Ensure that metering is allocated the minimum resource requirements described in the installation prerequisites. - -[NOTE] -==== -The Metering Operator does not autoscale the Reporting Operator based on the load in the cluster. Therefore, CPU usage for the Reporting Operator pod does not increase as the cluster grows. -==== - -To determine if the issue is with resources or scheduling, follow the troubleshooting instructions included in the Kubernetes document https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/#troubleshooting[Managing Compute Resources for Containers]. - -To troubleshoot issues due to a lack of compute resources, check the following within the `openshift-metering` namespace. - -.Prerequisites - -* You are currently in the `openshift-metering` namespace. Change to the `openshift-metering` namespace by running: -+ -[source,terminal] ----- -$ oc project openshift-metering ----- - -.Procedure - -. Check for metering `Report` resources that fail to complete and show the status of `ReportingPeriodUnmetDependencies`: -+ -[source,terminal] ----- -$ oc get reports ----- -+ -.Example output -[source,terminal] ----- -NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE -namespace-cpu-utilization-adhoc-10 namespace-cpu-utilization Finished 2020-10-31T00:00:00Z 2m38s -namespace-cpu-utilization-adhoc-11 namespace-cpu-utilization ReportingPeriodUnmetDependencies 2m23s -namespace-memory-utilization-202010 namespace-memory-utilization ReportingPeriodUnmetDependencies 26s -namespace-memory-utilization-202011 namespace-memory-utilization ReportingPeriodUnmetDependencies 14s ----- - -. Check the `ReportDataSource` resources where the `NEWEST METRIC` is less than the report end date: -+ -[source,terminal] ----- -$ oc get reportdatasource ----- -+ -.Example output -[source,terminal] ----- -NAME EARLIEST METRIC NEWEST METRIC IMPORT START IMPORT END LAST IMPORT TIME AGE -... -node-allocatable-cpu-cores 2020-04-23T09:14:00Z 2020-08-31T10:07:00Z 2020-04-23T09:14:00Z 2020-10-15T17:13:00Z 2020-12-09T12:45:10Z 230d -node-allocatable-memory-bytes 2020-04-23T09:14:00Z 2020-08-30T05:19:00Z 2020-04-23T09:14:00Z 2020-10-14T08:01:00Z 2020-12-09T12:45:12Z 230d -... -pod-usage-memory-bytes 2020-04-23T09:14:00Z 2020-08-24T20:25:00Z 2020-04-23T09:14:00Z 2020-10-09T23:31:00Z 2020-12-09T12:45:12Z 230d ----- - -. Check the health of the `reporting-operator` `Pod` resource for a high number of pod restarts: -+ -[source,terminal] ----- -$ oc get pods -l app=reporting-operator ----- -+ -.Example output -[source,terminal] ----- -NAME READY STATUS RESTARTS AGE -reporting-operator-84f7c9b7b6-fr697 2/2 Running 542 8d <1> ----- -<1> The Reporting Operator pod is restarting at a high rate. - -. Check the `reporting-operator` `Pod` resource for an `OOMKilled` termination: -+ -[source,terminal] ----- -$ oc describe pod/reporting-operator-84f7c9b7b6-fr697 ----- -+ -.Example output -[source,terminal] ----- -Name: reporting-operator-84f7c9b7b6-fr697 -Namespace: openshift-metering -Priority: 0 -Node: ip-10-xx-xx-xx.ap-southeast-1.compute.internal/10.xx.xx.xx -... - Ports: 8080/TCP, 6060/TCP, 8082/TCP - Host Ports: 0/TCP, 0/TCP, 0/TCP - State: Running - Started: Thu, 03 Dec 2020 20:59:45 +1000 - Last State: Terminated - Reason: OOMKilled <1> - Exit Code: 137 - Started: Thu, 03 Dec 2020 20:38:05 +1000 - Finished: Thu, 03 Dec 2020 20:59:43 +1000 ----- -<1> The Reporting Operator pod was terminated due to OOM kill. 
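-
-As a brief sketch, instead of scanning the full `describe` output you can query only the last termination reason of the pod containers with JSONPath. The pod name below is taken from the example above; replace it with the Reporting Operator pod in your cluster:
-
-[source,terminal]
-----
-$ oc -n openshift-metering get pod reporting-operator-84f7c9b7b6-fr697 \
-  -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'
-----
-
-A value of `OOMKilled` for any container confirms the out-of-memory termination.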
- - -[id="metering-check-and-increase-memory-limits_{context}"] -=== Increasing the reporting-operator pod memory limit - -If you are experiencing an increase in pod restarts and OOM kill events, you can check the current memory limit set for the Reporting Operator pod. Increasing the memory limit allows the Reporting Operator pod to update the report data sources. If necessary, increase the memory limit in your `MeteringConfig` resource by 25% - 50%. - -.Procedure - -. Check the current memory limits of the `reporting-operator` `Pod` resource: -+ -[source,terminal] ----- -$ oc describe pod reporting-operator-67d6f57c56-79mrt ----- -+ -.Example output -[source,terminal] ----- -Name: reporting-operator-67d6f57c56-79mrt -Namespace: openshift-metering -Priority: 0 -... - Ports: 8080/TCP, 6060/TCP, 8082/TCP - Host Ports: 0/TCP, 0/TCP, 0/TCP - State: Running - Started: Tue, 08 Dec 2020 14:26:21 +1000 - Ready: True - Restart Count: 0 - Limits: - cpu: 1 - memory: 500Mi <1> - Requests: - cpu: 500m - memory: 250Mi - Environment: -... ----- -<1> The current memory limit for the Reporting Operator pod. - -. Edit the `MeteringConfig` resource to update the memory limit: -+ -[source,terminal] ----- -$ oc edit meteringconfig/operator-metering ----- -+ -.Example `MeteringConfig` resource -[source,yaml] ----- -kind: MeteringConfig -metadata: - name: operator-metering - namespace: openshift-metering -spec: - reporting-operator: - spec: - resources: <1> - limits: - cpu: 1 - memory: 750Mi - requests: - cpu: 500m - memory: 500Mi -... ----- -<1> Add or increase memory limits within the `resources` field of the `MeteringConfig` resource. -+ -[NOTE] -==== -If there continue to be numerous OOM killed events after memory limits are increased, this might indicate that a different issue is causing the reports to be in a pending state. -==== - -[id="metering-storageclass-not-configured_{context}"] -== StorageClass resource not configured - -Metering requires that a default `StorageClass` resource be configured for dynamic provisioning. - -See the documentation on configuring metering for information on how to check if there are any `StorageClass` resources configured for the cluster, how to set the default, and how to configure metering to use a storage class other than the default. - -[id="metering-secret-not-configured-correctly_{context}"] -== Secret not configured correctly - -A common issue with metering is providing the incorrect secret when configuring your persistent storage. Be sure to review the example configuration files and create you secret according to the guidelines for your storage provider. diff --git a/modules/metering-uninstall-crds.adoc b/modules/metering-uninstall-crds.adoc deleted file mode 100644 index 66bd61ec3ecc..000000000000 --- a/modules/metering-uninstall-crds.adoc +++ /dev/null @@ -1,28 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-uninstall.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-uninstall-crds_{context}"] -= Uninstalling metering custom resource definitions - -The metering custom resource definitions (CRDs) remain in the cluster after the Metering Operator is uninstalled and the `openshift-metering` namespace is deleted. - -[IMPORTANT] -==== -Deleting the metering CRDs disrupts any additional metering installations in other namespaces in your cluster. Ensure that there are no other metering installations before proceeding. 
-==== - -.Prerequisites - -* The `MeteringConfig` custom resource in the `openshift-metering` namespace is deleted. -* The `openshift-metering` namespace is deleted. - -.Procedure - -* Delete the remaining metering CRDs: -+ -[source,terminal] ----- -$ oc get crd -o name | grep "metering.openshift.io" | xargs oc delete ----- diff --git a/modules/metering-uninstall.adoc b/modules/metering-uninstall.adoc deleted file mode 100644 index 4cfedd8bd188..000000000000 --- a/modules/metering-uninstall.adoc +++ /dev/null @@ -1,36 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-uninstall.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-uninstall_{context}"] -= Uninstalling a metering namespace - -Uninstall your metering namespace, for example the `openshift-metering` namespace, by removing the `MeteringConfig` resource and deleting the `openshift-metering` namespace. - -.Prerequisites - -* The Metering Operator is removed from your cluster. - -.Procedure - -. Remove all resources created by the Metering Operator: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering delete meteringconfig --all ----- - -. After the previous step is complete, verify that all pods in the `openshift-metering` namespace are deleted or are reporting a terminating state: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering get pods ----- - -. Delete the `openshift-metering` namespace: -+ -[source,terminal] ----- -$ oc delete namespace openshift-metering ----- diff --git a/modules/metering-use-mysql-or-postgresql-for-hive.adoc b/modules/metering-use-mysql-or-postgresql-for-hive.adoc deleted file mode 100644 index 38ebb49072ec..000000000000 --- a/modules/metering-use-mysql-or-postgresql-for-hive.adoc +++ /dev/null @@ -1,89 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/configuring_metering/metering-configure-hive-metastore.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-use-mysql-or-postgresql-for-hive_{context}"] -= Using MySQL or PostgreSQL for the Hive metastore - -The default installation of metering configures Hive to use an embedded Java database called Derby. This is unsuited for larger environments and can be replaced with either a MySQL or PostgreSQL database. Use the following example configuration files if your deployment requires a MySQL or PostgreSQL database for Hive. - -There are three configuration options you can use to control the database that is used by Hive metastore: `url`, `driver`, and `secretName`. - -Create your MySQL or Postgres instance with a user name and password. Then create a secret by using the OpenShift CLI (`oc`) or a YAML file. The `secretName` you create for this secret must map to the `spec.hive.spec.config.db.secretName` field in the `MeteringConfig` object resource. - -.Procedure - -. Create a secret using the OpenShift CLI (`oc`) or by using a YAML file: -+ -* Create a secret by using the following command: -+ -[source,terminal] ----- -$ oc --namespace openshift-metering create secret generic --from-literal=username= --from-literal=password= ----- -+ -* Create a secret by using a YAML file. For example: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Secret -metadata: - name: <1> -data: - username: <2> - password: <3> ----- -<1> The name of the secret. -<2> Base64 encoded database user name. -<3> Base64 encoded database password. - -. 
Create a configuration file to use a MySQL or PostgreSQL database for Hive: -+ -* To use a MySQL database for Hive, use the example configuration file below. Metering supports configuring the internal Hive metastore to use MySQL server versions 5.6, 5.7, and 8.0. -+ --- -[source,yaml] ----- -spec: - hive: - spec: - metastore: - storage: - create: false - config: - db: - url: "jdbc:mysql://mysql.example.com:3306/hive_metastore" <1> - driver: "com.mysql.cj.jdbc.Driver" - secretName: "REPLACEME" <2> ----- -[NOTE] -==== -When configuring Metering to work with older MySQL server versions, such as 5.6 or 5.7, you might need to add the link:https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-usagenotes-known-issues-limitations.html[`enabledTLSProtocols` JDBC URL parameter] when configuring the internal Hive metastore. -==== -<1> To use the TLS v1.2 cipher suite, set `url` to `"jdbc:mysql://:/?enabledTLSProtocols=TLSv1.2"`. -<2> The name of the secret containing the base64-encoded user name and password database credentials. --- -+ -You can pass additional JDBC parameters using the `spec.hive.config.url`. For more details, see the link:https://dev.mysql.com/doc/connector-j/8.0/en/connector-j-reference-configuration-properties.html[MySQL Connector/J 8.0 documentation]. -+ -* To use a PostgreSQL database for Hive, use the example configuration file below: -+ -[source,yaml] ----- -spec: - hive: - spec: - metastore: - storage: - create: false - config: - db: - url: "jdbc:postgresql://postgresql.example.com:5432/hive_metastore" - driver: "org.postgresql.Driver" - username: "" - password: "" ----- -+ -You can pass additional JDBC parameters using the `spec.hive.config.url`. For more details, see the link:https://jdbc.postgresql.org/documentation/head/connect.html#connection-parameters[PostgreSQL JDBC driver documentation]. diff --git a/modules/metering-viewing-report-results.adoc b/modules/metering-viewing-report-results.adoc deleted file mode 100644 index 49d22041a902..000000000000 --- a/modules/metering-viewing-report-results.adoc +++ /dev/null @@ -1,103 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-using-metering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-viewing-report-results_{context}"] -= Viewing report results - -Viewing a report's results involves querying the reporting API route and authenticating to the API using your {product-title} credentials. -Reports can be retrieved in `JSON`, `CSV`, or `Tabular` format. - -.Prerequisites - -* Metering is installed. -* To access report results, you must either be a cluster administrator or be granted access by using the `report-exporter` role in the `openshift-metering` namespace. - -.Procedure - -. Change to the `openshift-metering` project: -+ -[source,terminal] ----- -$ oc project openshift-metering ----- - -. Query the reporting API for results: - -.. Create a variable for the metering `reporting-api` route, and then get the route: -+ -[source,terminal] ----- -$ meteringRoute="$(oc get routes metering -o jsonpath='{.spec.host}')" ----- -+ -[source,terminal] ----- -$ echo "$meteringRoute" ----- - -.. Get the token of your current user to be used in the request: -+ -[source,terminal] ----- -$ token="$(oc whoami -t)" ----- - -.. Set `reportName` to the name of the report you created: -+ -[source,terminal] ----- -$ reportName=namespace-cpu-request-2020 ----- - -..
Set `reportFormat` to one of `csv`, `json`, or `tabular` to specify the output format of the API response: -+ -[source,terminal] ----- -$ reportFormat=csv ----- - -.. To get the results, use `curl` to make a request to the reporting API for your report: -+ -[source,terminal] ----- -$ curl --insecure -H "Authorization: Bearer ${token}" "https://${meteringRoute}/api/v1/reports/get?name=${reportName}&namespace=openshift-metering&format=$reportFormat" ----- -+ -.Example output with `reportName=namespace-cpu-request-2020` and `reportFormat=csv` -[source,terminal] ----- -period_start,period_end,namespace,pod_request_cpu_core_seconds -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-apiserver,11745.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-apiserver-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-authentication,522.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-authentication-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cloud-credential-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-machine-approver,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-node-tuning-operator,3385.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-samples-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-cluster-version,522.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-console,522.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-console-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-controller-manager,7830.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-controller-manager-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-dns,34372.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-dns-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-etcd,23490.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-image-registry,5993.400000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-ingress,5220.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-ingress-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver,12528.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-apiserver-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager,8613.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-kube-controller-manager-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-machine-api,1305.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-machine-config-operator,9637.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-metering,19575.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-monitoring,6256.800000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-network-operator,261.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-sdn,94503.000000 -2020-01-01 00:00:00 
+0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-service-ca,783.000000 -2020-01-01 00:00:00 +0000 UTC,2020-12-30 23:59:59 +0000 UTC,openshift-service-ca-operator,261.000000 ----- diff --git a/modules/metering-writing-reports.adoc b/modules/metering-writing-reports.adoc deleted file mode 100644 index 4f4538f1046d..000000000000 --- a/modules/metering-writing-reports.adoc +++ /dev/null @@ -1,73 +0,0 @@ -// Module included in the following assemblies: -// -// * metering/metering-using-metering.adoc - -:_mod-docs-content-type: PROCEDURE -[id="metering-writing-reports_{context}"] -= Writing Reports - -Writing a report is the way to process and analyze data using metering. - -To write a report, you must define a `Report` resource in a YAML file, specify the required parameters, and create it in the `openshift-metering` namespace. - -.Prerequisites - -* Metering is installed. - -.Procedure - -. Change to the `openshift-metering` project: -+ -[source,terminal] ----- -$ oc project openshift-metering ----- - -. Create a `Report` resource as a YAML file: -+ -.. Create a YAML file with the following content: -+ -[source,yaml] ----- -apiVersion: metering.openshift.io/v1 -kind: Report -metadata: - name: namespace-cpu-request-2020 <2> - namespace: openshift-metering -spec: - reportingStart: '2020-01-01T00:00:00Z' - reportingEnd: '2020-12-30T23:59:59Z' - query: namespace-cpu-request <1> - runImmediately: true <3> ----- -<1> The `query` specifies the `ReportQuery` resources used to generate the report. Change this based on what you want to report on. For a list of options, run `oc get reportqueries | grep -v raw`. -<2> Use a descriptive name about what the report does for `metadata.name`. A good name describes the query, and the schedule or period you used. -<3> Set `runImmediately` to `true` for it to run with whatever data is available, or set it to `false` if you want it to wait for `reportingEnd` to pass. - -.. Run the following command to create the `Report` resource: -+ -[source,terminal] ----- -$ oc create -f .yaml ----- -+ -.Example output -[source,terminal] ----- -report.metering.openshift.io/namespace-cpu-request-2020 created ----- -+ - -. You can list reports and their `Running` status with the following command: -+ -[source,terminal] ----- -$ oc get reports ----- -+ -.Example output -[source,terminal] ----- -NAME QUERY SCHEDULE RUNNING FAILED LAST REPORT TIME AGE -namespace-cpu-request-2020 namespace-cpu-request Finished 2020-12-30T23:59:59Z 26s ----- diff --git a/modules/mod-docs-ocp-conventions.adoc b/modules/mod-docs-ocp-conventions.adoc deleted file mode 100644 index 37624cfe1ad7..000000000000 --- a/modules/mod-docs-ocp-conventions.adoc +++ /dev/null @@ -1,154 +0,0 @@ -// Module included in the following assemblies: -// -// * mod_docs_guide/mod-docs-conventions-ocp.adoc - -// Base the file name and the ID on the module title. For example: -// * file name: my-reference-a.adoc -// * ID: [id="my-reference-a"] -// * Title: = My reference A - -[id="mod-docs-ocp-conventions_{context}"] -= Modular docs OpenShift conventions - -These Modular Docs conventions for OpenShift docs build on top of the CCS -modular docs guidelines. - -These guidelines and conventions should be read along with the: - -* General CCS -link:https://redhat-documentation.github.io/modular-docs/[modular docs guidelines]. 
-* link:https://redhat-documentation.github.io/asciidoc-markup-conventions/[AsciiDoc markup conventions] -* link:https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/contributing.adoc[OpenShift Contribution Guide] -* link:https://github.com/openshift/openshift-docs/blob/master/contributing_to_docs/doc_guidelines.adoc[OpenShift Documentation Guidelines] - -IMPORTANT: If some convention is duplicated, the convention in this guide -supersedes all others. - -[id="ocp-ccs-conventions_{context}"] -== OpenShift CCS conventions - -* All assemblies must define a context that is unique. -+ -Add this context at the top of the page, just before the first anchor id. -+ -Example: -+ ----- -:context: assembly-gsg ----- - -* All assemblies must include the `_attributes/common-attributes.adoc` file near the -context statement. This file contains the standard attributes for the collection. -+ -`include::_attributes/common-attributes.adoc[leveloffset=+1]` - -* All anchor ids must follow the format: -+ ----- -[id="_{context}"] ----- -+ -Anchor name is _connected_ to the `{context}` using a dash. -+ -Example: -+ ----- -[id="creating-your-first-content_{context}"] ----- - -* All modules anchor ids must have the `{context}` variable. -+ -This is just reiterating the format described in the previous bullet point. - -* A comment section must be present at the top of each module and assembly, as -shown in the link:https://github.com/redhat-documentation/modular-docs/tree/master/modular-docs-manual/files[modular docs templates]. -+ -The modules comment section must list which assemblies this module has been -included in, while the assemblies comment section must include other assemblies -that it itself is included in, if any. -+ -Example comment section in an assembly: -+ ----- -// This assembly is included in the following assemblies: -// -// NONE ----- -+ -Example comment section in a module: -+ ----- -// Module included in the following assemblies: -// -// mod_docs_guide/mod-docs-conventions-ocp.adoc ----- - -* All modules must go in the modules directory which is present in the top level -of the openshift-docs repository. These modules must follow the file naming -conventions specified in the -link:https://redhat-documentation.github.io/modular-docs/[modular docs guidelines]. - -* All assemblies must go in the relevant guide/book. If you can't find a relevant - guide/book, reach out to a member of the OpenShift CCS team. So guides/books contain assemblies, which - contain modules. - -* modules and images folders are symlinked to the top level folder from each book/guide folder. - -* In your assemblies, when you are linking to the content in other books, you must -use the relative path starting like so: -+ ----- -xref:../architecture/architecture.adoc#architecture[architecture] overview. ----- -+ -[IMPORTANT] -==== -You must not include xrefs in modules or create an xref to a module. You can -only use xrefs to link from one assembly to another. -==== - -* All modules in assemblies must be included using the following format (replace 'ilude' with 'include'): -+ -`ilude::modules/.adoc[]` -+ -_OR_ -+ -`ilude::modules/.adoc[leveloffset=+]` -+ -if it requires a leveloffset. -+ -Example: -+ -`include::modules/creating-your-first-content.adoc[leveloffset=+1]` - -NOTE: There is no `..` at the starting of the path. - -//// -* If your assembly is in a subfolder of a guide/book directory, you must add a -statement to the assembly's metadata to use `relfileprefix`. 
-+ -This adjusts all the xref links in your modules to start from the root -directory. -+ -At the top of the assembly (in the metadata section), add the following line: -+ ----- -:relfileprefix: ../ ----- -+ -NOTE: There is a space between the second : and the ../. - -+ -The only difference in including a module in the _install_config/index.adoc_ -assembly and _install_config/install/planning.adoc_ assembly is the addition of -the `:relfileprefix: ../` attribute at the top of the -_install_config/install/planning.adoc_ assembly. The actual inclusion of -module remains the same as described in the previous bullet. - -+ -NOTE: This strategy is in place so that links resolve correctly on both -docs.openshift.com and portal docs. -//// - -* Do not use 3rd level folders even though AsciiBinder permits it. If you need -to, work out a better way to organize your content. diff --git a/modules/multi-architecture-scheduling-overview.adoc b/modules/multi-architecture-scheduling-overview.adoc deleted file mode 100644 index 4bc4c782604a..000000000000 --- a/modules/multi-architecture-scheduling-overview.adoc +++ /dev/null @@ -1,13 +0,0 @@ -// module included in the following assembly -// -//post_installation_configuration/configuring-multi-arch-compute-machines/multi-architecture-compute-managing.adoc - -:_mod-docs-content-type: CONCEPT -[id="multi-architecture-scheduling-overview_{context}"] -= Scheduling workloads on clusters with multi-architecture compute machines - -Before deploying a workload onto a cluster with compute nodes of different architectures, you must configure your compute node scheduling process so the pods in your cluster are correctly assigned. - -You can schedule workloads onto multi-architecture nodes for your cluster in several ways. For example, you can use a node affinity or a node selector to select the node that you want the pod to schedule onto. You can also use scheduling mechanisms, such as taints and tolerations, when using node affinity or a node selector to correctly schedule workloads. - - diff --git a/modules/nbde-managing-encryption-keys.adoc b/modules/nbde-managing-encryption-keys.adoc deleted file mode 100644 index 25d0849f1a78..000000000000 --- a/modules/nbde-managing-encryption-keys.adoc +++ /dev/null @@ -1,10 +0,0 @@ -// Module included in the following assemblies: -// -// security/nbde-implementation-guide.adoc - -[id="nbde-managing-encryption-keys_{context}"] -= Tang server encryption key management - -The cryptographic mechanism to recreate the encryption key is based on the _blinded key_ stored on the node and the private key of the involved Tang servers. To protect against the possibility of an attacker who has obtained both the Tang server private key and the node’s encrypted disk, periodic rekeying is advisable. - -You must perform the rekeying operation for every node before you can delete the old key from the Tang server. The following sections provide procedures for rekeying and deleting old keys.
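Before you rekey, it can be useful to record which Tang server each node is currently bound to. The following command is a minimal sketch that assumes a Clevis-bound LUKS device at `/dev/sda4`; adjust the device path for your nodes.

[source,terminal]
----
# clevis luks list -d /dev/sda4
----

The output lists the binding slot, the pin type (`tang`), and the Tang server URL, which you can compare against the server whose keys you plan to rotate.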
diff --git a/modules/nw-multinetwork-sriov.adoc b/modules/nw-multinetwork-sriov.adoc deleted file mode 100644 index f0e45cb242ba..000000000000 --- a/modules/nw-multinetwork-sriov.adoc +++ /dev/null @@ -1,314 +0,0 @@ -// Module name: nw_multinetwork-sriov.adoc -// Module included in the following assemblies: -// -// * networking/managing_multinetworking.adoc - -:image-prefix: ose - -ifdef::openshift-origin[] -:image-prefix: origin -endif::openshift-origin[] - -[id="nw-multinetwork-sriov_{context}"] -= Configuring SR-IOV - -{product-title} includes the capability to use SR-IOV hardware on -{product-title} nodes, which enables you to attach SR-IOV virtual function (VF) -interfaces to Pods in addition to other network interfaces. - -Two components are required to provide this capability: the SR-IOV network -device plug-in and the SR-IOV CNI plug-in. - -* The SR-IOV network device plug-in is a Kubernetes device plug-in for -discovering, advertising, and allocating SR-IOV network virtual function (VF) -resources. Device plug-ins are used in Kubernetes to enable the use of limited -resources, typically in physical devices. Device plug-ins give the Kubernetes -scheduler awareness of which resources are exhausted, allowing Pods to be -scheduled to worker nodes that have sufficient resources available. - -* The SR-IOV CNI plug-in plumbs VF interfaces allocated from the SR-IOV device -plug-in directly into a Pod. - -== Supported Devices - -The following Network Interface Card (NIC) models are supported in -{product-title}: - -* Intel XXV710-DA2 25G card with vendor ID 0x8086 and device ID 0x158b -* Mellanox MT27710 Family [ConnectX-4 Lx] 25G card with vendor ID 0x15b3 -and device ID 0x1015 -* Mellanox MT27800 Family [ConnectX-5] 100G card with vendor ID 0x15b3 -and device ID 0x1017 - -[NOTE] -==== -For Mellanox cards, ensure that SR-IOV is enabled in the firmware before -provisioning VFs on the host. -==== - -== Creating SR-IOV plug-ins and daemonsets - -[NOTE] -==== -The creation of SR-IOV VFs is not handled by the SR-IOV device plug-in and -SR-IOV CNI. -To provision SR-IOV VF on hosts, you must configure it manually. -==== - -To use the SR-IOV network device plug-in and SR-IOV CNI plug-in, run both -plug-ins in daemon mode on each node in your cluster. - -. Create a YAML file for the `openshift-sriov` namespace with the following -contents: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Namespace -metadata: - name: openshift-sriov - labels: - name: openshift-sriov - openshift.io/run-level: "0" - annotations: - openshift.io/node-selector: "" - openshift.io/description: "Openshift SR-IOV network components" ----- - -. Run the following command to create the `openshift-sriov` namespace: -+ ----- -$ oc create -f openshift-sriov.yaml ----- - -. Create a YAML file for the `sriov-device-plugin` service account with the -following contents: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: sriov-device-plugin - namespace: openshift-sriov ----- - -. Run the following command to create the `sriov-device-plugin` service account: -+ ----- -$ oc create -f sriov-device-plugin.yaml ----- - -. Create a YAML file for the `sriov-cni` service account with the following -contents: -+ -[source,yaml] ----- -apiVersion: v1 -kind: ServiceAccount -metadata: - name: sriov-cni - namespace: openshift-sriov ----- - -. Run the following command to create the `sriov-cni` service account: -+ ----- -$ oc create -f sriov-cni.yaml ----- - -. 
Create a YAML file for the `sriov-device-plugin` DaemonSet with the following -contents: -+ -[NOTE] -==== -The SR-IOV network device plug-in daemon, when launched, will discover all the -configured SR-IOV VFs (of supported NIC models) on each node and advertise -discovered resources. The number of available SR-IOV VF resources that are -capable of being allocated can be reviewed by describing a node with the -[command]`oc describe node ` command. The resource name for the -SR-IOV VF resources is `openshift.io/sriov`. When no SR-IOV VFs are available on -the node, a value of zero is displayed. -==== -+ -[source,yaml,subs="attributes"] ----- -kind: DaemonSet -apiVersion: apps/v1 -metadata: - name: sriov-device-plugin - namespace: openshift-sriov - annotations: - kubernetes.io/description: | - This daemon set launches the SR-IOV network device plugin on each node. -spec: - selector: - matchLabels: - app: sriov-device-plugin - updateStrategy: - type: RollingUpdate - template: - metadata: - labels: - app: sriov-device-plugin - component: network - type: infra - openshift.io/component: network - spec: - hostNetwork: true - nodeSelector: - beta.kubernetes.io/os: linux - tolerations: - - operator: Exists - serviceAccountName: sriov-device-plugin - containers: - - name: sriov-device-plugin - image: quay.io/openshift/{image-prefix}-sriov-network-device-plugin:v4.0.0 - args: - - --log-level=10 - securityContext: - privileged: true - volumeMounts: - - name: devicesock - mountPath: /var/lib/kubelet/ - readOnly: false - - name: net - mountPath: /sys/class/net - readOnly: true - volumes: - - name: devicesock - hostPath: - path: /var/lib/kubelet/ - - name: net - hostPath: - path: /sys/class/net ----- - -. Run the following command to create the `sriov-device-plugin` DaemonSet: -+ ----- -oc create -f sriov-device-plugin.yaml ----- - -. Create a YAML file for the `sriov-cni` DaemonSet with the following contents: -+ -[source,yaml,subs="attributes"] ----- -kind: DaemonSet -apiVersion: apps/v1 -metadata: - name: sriov-cni - namespace: openshift-sriov - annotations: - kubernetes.io/description: | - This daemon set launches the SR-IOV CNI plugin on SR-IOV capable worker nodes. -spec: - selector: - matchLabels: - app: sriov-cni - updateStrategy: - type: RollingUpdate - template: - metadata: - labels: - app: sriov-cni - component: network - type: infra - openshift.io/component: network - spec: - nodeSelector: - beta.kubernetes.io/os: linux - tolerations: - - operator: Exists - serviceAccountName: sriov-cni - containers: - - name: sriov-cni - image: quay.io/openshift/{image-prefix}-sriov-cni:v4.0.0 - securityContext: - privileged: true - volumeMounts: - - name: cnibin - mountPath: /host/opt/cni/bin - volumes: - - name: cnibin - hostPath: - path: /var/lib/cni/bin ----- - -. Run the following command to create the `sriov-cni` DaemonSet: -+ ----- -$ oc create -f sriov-cni.yaml ----- - -== Configuring additional interfaces using SR-IOV - -. Create a YAML file for the Custom Resource (CR) with SR-IOV configuration. The -`name` field in the following CR has the value `sriov-conf`. 
-+ -[source,yaml] ----- -apiVersion: "k8s.cni.cncf.io/v1" -kind: NetworkAttachmentDefinition -metadata: - name: sriov-conf - annotations: - k8s.v1.cni.cncf.io/resourceName: openshift.io/sriov <1> -spec: - config: '{ - "type": "sriov", <2> - "name": "sriov-conf", - "ipam": { - "type": "host-local", - "subnet": "10.56.217.0/24", - "routes": [{ - "dst": "0.0.0.0/0" - }], - "gateway": "10.56.217.1" - } - }' ----- -+ -<1> `k8s.v1.cni.cncf.io/resourceName` annotation is set to `openshift.io/sriov`. -<2> `type` is set to `sriov`. - -. Run the following command to create the `sriov-conf` CR: -+ ----- -$ oc create -f sriov-conf.yaml ----- - -. Create a YAML file for a Pod which references the name of the -`NetworkAttachmentDefinition` and requests one `openshift.io/sriov` resource: -+ -[source,yaml] ----- -apiVersion: v1 -kind: Pod -metadata: - name: sriovsamplepod - annotations: - k8s.v1.cni.cncf.io/networks: sriov-conf -spec: - containers: - - name: sriovsamplepod - command: ["/bin/bash", "-c", "sleep 2000000000000"] - image: centos/tools - resources: - requests: - openshift.io/sriov: '1' - limits: - openshift.io/sriov: '1' ----- - -. Run the following command to create the `sriovsamplepod` Pod: -+ ----- -$ oc create -f sriovsamplepod.yaml ----- - -. View the additional interface by executing the `ip` command: -+ ----- -$ oc exec sriovsamplepod -- ip a ----- diff --git a/modules/nw-pdncc-view.adoc b/modules/nw-pdncc-view.adoc deleted file mode 100644 index e69de29bb2d1..000000000000 diff --git a/modules/nw-pod-network-connectivity-configuration.adoc b/modules/nw-pod-network-connectivity-configuration.adoc deleted file mode 100644 index db8af9db2dc9..000000000000 --- a/modules/nw-pod-network-connectivity-configuration.adoc +++ /dev/null @@ -1,48 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/verifying-connectivity-endpoint.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-pod-network-connectivity-configuration_{context}"] -= Configuring pod connectivity check placement - -As a cluster administrator, you can configure which nodes the connectivity check source and target pods run by modifying the `network.config.openshift.io` object named `cluster`. - -.Prerequisites - -* Install the {oc-first}. - -.Procedure - -. Edit the connectivity check configuration by entering the following command: -+ -[source,terminal] ----- -$ oc edit network.config.openshift.io cluster ----- - -. In the text editor, update the `networkDiagnostics` stanza to specify the node selectors that you want for the source and target pods. - -. Save your changes and exit the text editor. 
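For reference, a configured `networkDiagnostics` stanza might look similar to the following sketch. The `sourcePlacement` and `targetPlacement` field names and the `checkNodes` labels are assumptions used for illustration; verify the exact schema on your cluster, for example with `oc explain networks.config.openshift.io.spec.networkDiagnostics`.

[source,yaml]
----
spec:
  networkDiagnostics:
    mode: All
    sourcePlacement:
      nodeSelector:
        checkNodes: groupA # example label, replace with a label on your nodes
    targetPlacement:
      nodeSelector:
        checkNodes: groupB # example label, replace with a label on your nodes
      tolerations:
      - operator: Exists
----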
- -.Verification - -* Verify that the source and target pods are running on the intended nodes by entering the following command: - -[source,terminal] ----- -$ oc get pods -n openshift-network-diagnostics -o wide ----- - -.Example output -[source,text] ----- -NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES -network-check-source-84c69dbd6b-p8f7n 1/1 Running 0 9h 10.131.0.8 ip-10-0-40-197.us-east-2.compute.internal -network-check-target-46pct 1/1 Running 0 9h 10.131.0.6 ip-10-0-40-197.us-east-2.compute.internal -network-check-target-8kwgf 1/1 Running 0 9h 10.128.2.4 ip-10-0-95-74.us-east-2.compute.internal -network-check-target-jc6n7 1/1 Running 0 9h 10.129.2.4 ip-10-0-21-151.us-east-2.compute.internal -network-check-target-lvwnn 1/1 Running 0 9h 10.128.0.7 ip-10-0-17-129.us-east-2.compute.internal -network-check-target-nslvj 1/1 Running 0 9h 10.130.0.7 ip-10-0-89-148.us-east-2.compute.internal -network-check-target-z2sfx 1/1 Running 0 9h 10.129.0.4 ip-10-0-60-253.us-east-2.compute.internal ----- diff --git a/modules/nw-secondary-ext-gw-status.adoc b/modules/nw-secondary-ext-gw-status.adoc deleted file mode 100644 index 24c792804080..000000000000 --- a/modules/nw-secondary-ext-gw-status.adoc +++ /dev/null @@ -1,45 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/ovn_kubernetes_network_provider/configuring-secondary-external-gateway.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-secondary-ext-gw-status_{context}"] -= View the status of an external gateway - -You can view the status of an external gateway that is configured for your cluster. The `status` field for the `AdminPolicyBasedExternalRoute` custom resource reports recent status messages whenever you update the resource, subject to a few limitations: - -- Namespaces impacted are not reported in status messages -- Pods selected as part of a dynamic next hop configuration do not trigger status updates as a result of pod lifecycle events, such as pod termination - -.Prerequisites - -* You installed the OpenShift CLI (`oc`). -* You are logged in to the cluster with a user with `cluster-admin` privileges. - -.Procedure - -* To access the status logs for a secondary external gateway, enter the following command: -+ -[source,terminal] ----- -$ oc get adminpolicybasedexternalroutes -o yaml ----- -+ --- -where: - -``:: Specifies the name of an `AdminPolicyBasedExternalRoute` object. --- -+ -.Example output -[source,text] ----- -... -Status: - Last Transition Time: 2023-04-24T14:49:45Z - Messages: - Configured external gateway IPs: 172.18.0.8,172.18.0.9 - Configured external gateway IPs: 172.18.0.8 - Status: Success -Events: ----- diff --git a/modules/nw-sriov-about-all-multi-cast_mode.adoc b/modules/nw-sriov-about-all-multi-cast_mode.adoc deleted file mode 100644 index 57ee39fac70e..000000000000 --- a/modules/nw-sriov-about-all-multi-cast_mode.adoc +++ /dev/null @@ -1,21 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/hardware_networks/configuring-interface-sysctl-sriov-device.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-about-all-one-sysctl-flag_{context}"] -= Setting one sysctl flag - -You can set interface-level network `sysctl` settings for a pod connected to a SR-IOV network device. - -In this example, `net.ipv4.conf.IFNAME.accept_redirects` is set to `1` on the created virtual interfaces. - -The `sysctl-tuning-test` namespace is used in this example. 
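For context, the interface-level `sysctl` setting is ultimately delivered to the pod through the SR-IOV network definition rather than set on the node directly. The following is a rough sketch of where the value lands; the `metaPlugins` field, the object name, and the resource name are assumptions for illustration only and are not the literal objects created in this procedure.

[source,yaml]
----
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetwork
metadata:
  name: example-allow-redirects # hypothetical name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: examplenic # hypothetical resource name
  networkNamespace: sysctl-tuning-test
  ipam: '{ "type": "static" }'
  metaPlugins: |
    {
      "type": "tuning",
      "sysctl": {
        "net.ipv4.conf.IFNAME.accept_redirects": "1"
      }
    }
----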
- -.Procedure -* Use the following command to create the `sysctl-tuning-test` namespace: -+ -[source,terminal] ----- -$ oc create namespace sysctl-tuning-test ----- \ No newline at end of file diff --git a/modules/persistent-storage-csi-cloning-using.adoc b/modules/persistent-storage-csi-cloning-using.adoc deleted file mode 100644 index e28e7e1e185e..000000000000 --- a/modules/persistent-storage-csi-cloning-using.adoc +++ /dev/null @@ -1,32 +0,0 @@ -// Module included in the following assemblies: -// -// * storage/container_storage_interface/persistent-storage-csi-cloning.adoc - -[id="persistent-storage-csi-cloning-using_{context}"] -= Using a cloned PVC as a storage volume - -A newly cloned persistent volume claim (PVC) can be consumed, cloned, snapshotted, or deleted independently of its original `dataSource` PVC. - -Pods can access storage by using the cloned PVC as a volume. For example: - -.Use CSI volume clone in the Pod -[source,yaml] ----- -kind: Pod -apiVersion: v1 -metadata: - name: mypod -spec: - containers: - - name: myfrontend - image: dockerfile/nginx - volumeMounts: - - mountPath: "/var/www/html" - name: mypd - volumes: - - name: mypd - persistentVolumeClaim: - claimName: pvc-1-clone <1> ----- - -<1> The cloned PVC created during the CSI volume cloning operation. diff --git a/modules/providing-direct-documentation-feedback.adoc b/modules/providing-direct-documentation-feedback.adoc deleted file mode 100644 index b4bd0eab4840..000000000000 --- a/modules/providing-direct-documentation-feedback.adoc +++ /dev/null @@ -1,24 +0,0 @@ -:_module-type: CONCEPT - -[id="providing-direct-documentation-feedback_{context}"] -= Providing feedback on Red Hat documentation - -[role="_abstract"] -We appreciate your feedback on our technical content and encourage you to tell us what you think. -If you'd like to add comments, provide insights, correct a typo, or even ask a question, you can do so directly in the documentation. - -[NOTE] -==== -You must have a Red Hat account and be logged in to the customer portal. -==== - -To submit documentation feedback from the customer portal, do the following: - -. Select the *Multi-page HTML* format. -. Click the *Feedback* button at the top-right of the document. -. Highlight the section of text where you want to provide feedback. -. Click the *Add Feedback* dialog next to your highlighted text. -. Enter your feedback in the text box on the right of the page and then click *Submit*. - -We automatically create a tracking issue each time you submit feedback. -Open the link that is displayed after you click *Submit* and start watching the issue or add more comments. diff --git a/modules/rbac-updating-policy-definitions.adoc b/modules/rbac-updating-policy-definitions.adoc deleted file mode 100644 index 1a2e45a62e90..000000000000 --- a/modules/rbac-updating-policy-definitions.adoc +++ /dev/null @@ -1,57 +0,0 @@ -// Module included in the following assemblies: -// -// * orphaned - -ifdef::openshift-enterprise,openshift-webscale,openshift-origin[] -[id="updating-policy-definitions_{context}"] -= Updating policy definitions - -During a cluster upgrade, and on every restart of any master, the -default cluster roles are automatically reconciled to restore any missing permissions. - -If you customized default cluster roles and want to ensure a role reconciliation -does not modify them, you must take the following actions. - -.Procedure - -. 
Protect each role from reconciliation: -+ ----- -$ oc annotate clusterrole.rbac --overwrite rbac.authorization.kubernetes.io/autoupdate=false ----- -+ -[WARNING] -==== -You must manually update the roles that contain this setting to include any new -or required permissions after upgrading. -==== - -. Generate a default bootstrap policy template file: -+ ----- -$ oc adm create-bootstrap-policy-file --filename=policy.json ----- -+ -[NOTE] -==== -The contents of the file vary based on the {product-title} version, but the file -contains only the default policies. -==== - -. Update the *_policy.json_* file to include any cluster role customizations. - -. Use the policy file to automatically reconcile roles and role bindings that -are not reconcile protected: -+ ----- -$ oc auth reconcile -f policy.json ----- - -. Reconcile Security Context Constraints: -+ ----- -# oc adm policy reconcile-sccs \ - --additive-only=true \ - --confirm ----- -endif::[] diff --git a/modules/running-modified-installation.adoc b/modules/running-modified-installation.adoc deleted file mode 100644 index f2c75c4d0d68..000000000000 --- a/modules/running-modified-installation.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="running-modified-installation_{context}"] -= Running a modified {product-title} installation - -Running a default {product-title} {product-version} cluster is the best way to ensure that the {product-title} cluster you get will be easy to install, maintain, and upgrade going forward. However, because you may want to add to or change your {product-title} cluster, openshift-install offers several ways to modify the default installation or add to it later. These include: - -* Creating an install-config file: Changing the contents of the install-config file, to identify things like the cluster name and credentials, is fully supported. -* Creating ignition-config files: Viewing ignition-config files, which define how individual nodes are configured when they are first deployed, is fully supported. However, changing those files is not supported. -* Creating Kubernetes (manifests) and {product-title} (openshift) manifest files: You can view manifest files in the manifests and openshift directories to see how Kubernetes and {product-title} features are configured, respectively. Changing those files is not supported. - -Whether you want to change your {product-title} installation or simply gain a deeper understanding of the details of the installation process, the goal of this section is to step you through an {product-title} installation. Along the way, it covers: - -* The underlying activities that go on under the covers to bring up an {product-title} cluster -* Major components that are leveraged ({op-system}, Ignition, Terraform, and so on) -* Opportunities to customize the install process (install configs, Ignition configs, manifests, and so on) diff --git a/modules/service-accounts-adding-secrets.adoc b/modules/service-accounts-adding-secrets.adoc deleted file mode 100644 index 11d925ea62c7..000000000000 --- a/modules/service-accounts-adding-secrets.adoc +++ /dev/null @@ -1,70 +0,0 @@ -// Module included in the following assemblies: -// -// * authentication/using-service-accounts.adoc - -[id="service-accounts-managing-secrets_{context}"] -== Managing secrets on a service account's pod - -In addition to providing API credentials, a pod's service account determines -which secrets the pod is allowed to use. 
- -Pods use secrets in two ways: - -* image pull secrets, providing credentials used to pull images for the pod's containers -* mountable secrets, injecting the contents of secrets into containers as files - -To allow a secret to be used as an image pull secret by a service account's -pods, run: - ----- -$ oc secrets link --for=pull ----- - -To allow a secret to be mounted by a service account's pods, run: - ----- -$ oc secrets link --for=mount ----- - -[NOTE] -==== -Limiting secrets to only the service accounts that reference them is disabled by -default. This means that if `serviceAccountConfig.limitSecretReferences` is set -to `false` (the default setting) in the master configuration file, mounting -secrets to a service account's pods with the `--for=mount` option is not -required. However, using the `--for=pull` option to enable using an image pull -secret is required, regardless of the -`serviceAccountConfig.limitSecretReferences` value. -==== - -This example creates and adds secrets to a service account: - ----- -$ oc create secret generic secret-plans \ - --from-file=plan1.txt \ - --from-file=plan2.txt -secret/secret-plans - -$ oc create secret docker-registry my-pull-secret \ - --docker-username=mastermind \ - --docker-password=12345 \ - --docker-email=mastermind@example.com -secret/my-pull-secret - -$ oc secrets link robot secret-plans --for=mount - -$ oc secrets link robot my-pull-secret --for=pull - -$ oc describe serviceaccount robot -Name: robot -Labels: -Image pull secrets: robot-dockercfg-624cx - my-pull-secret - -Mountable secrets: robot-token-uzkbh - robot-dockercfg-624cx - secret-plans - -Tokens: robot-token-8bhpp - robot-token-uzkbh ----- diff --git a/modules/service-accounts-managing-secrets.adoc b/modules/service-accounts-managing-secrets.adoc deleted file mode 100644 index cae0fb9bf790..000000000000 --- a/modules/service-accounts-managing-secrets.adoc +++ /dev/null @@ -1,65 +0,0 @@ -// Module included in the following assemblies: -// -// * authentication/using-service-accounts.adoc - -[id="service-accounts-managing-secrets_{context}"] -= Managing allowed secrets - -You can use the service account's secrets in your application's pods for: - -* Image pull secrets, providing credentials used to pull images for the pod's containers -* Mountable secrets, injecting the contents of secrets into containers as files - -.Procedure - -. Create a secret: -+ ----- -$ oc create secret generic \ - --from-file=.txt - -secret/ ----- - -. To allow a secret to be used as an image pull secret by a service account's -pods, run: -+ ----- -$ oc secrets link --for=pull ----- - -. To allow a secret to be mounted by a service account's pods, run: -+ ----- -$ oc secrets link --for=mount ----- - -. Confirm that the secret was added to the service account: -+ ----- -$ oc describe serviceaccount -Name: -Labels: -Image pull secrets: robot-dockercfg-624cx - my-pull-secret - -Mountable secrets: robot-token-uzkbh - robot-dockercfg-624cx - secret-plans - -Tokens: robot-token-8bhpp - robot-token-uzkbh ----- - -//// -[NOTE] -==== -Limiting secrets to only the service accounts that reference them is disabled by -default. This means that if `serviceAccountConfig.limitSecretReferences` is set -to `false` (the default setting) in the master configuration file, mounting -secrets to a service account's pods with the `--for=mount` option is not -required. 
However, using the `--for=pull` option to enable using an image pull -secret is required, regardless of the -`serviceAccountConfig.limitSecretReferences` value. -==== -//// diff --git a/modules/sts-mode-installing-manual-run-installer.adoc b/modules/sts-mode-installing-manual-run-installer.adoc deleted file mode 100644 index e81bc3c9cc63..000000000000 --- a/modules/sts-mode-installing-manual-run-installer.adoc +++ /dev/null @@ -1,66 +0,0 @@ -// Module included in the following assemblies: -// -// * authentication/managing_cloud_provider_credentials/cco-mode-sts.adoc -// * authentication/managing_cloud_provider_credentials/cco-mode-gcp-workload-identity.adoc - -:_mod-docs-content-type: PROCEDURE -[id="sts-mode-installing-manual-run-installer_{context}"] -= Running the installer - -.Prerequisites - -* Configure an account with the cloud platform that hosts your cluster. -* Obtain the {product-title} release image. - -.Procedure - -. Change to the directory that contains the installation program and create the `install-config.yaml` file: -+ -[source,terminal] ----- -$ openshift-install create install-config --dir ----- -+ -where `` is the directory in which the installation program creates files. - -. Edit the `install-config.yaml` configuration file so that it contains the `credentialsMode` parameter set to `Manual`. -+ -.Example `install-config.yaml` configuration file -[source,yaml] ----- -apiVersion: v1 -baseDomain: cluster1.example.com -credentialsMode: Manual <1> -compute: -- architecture: amd64 - hyperthreading: Enabled ----- -<1> This line is added to set the `credentialsMode` parameter to `Manual`. - -. Create the required {product-title} installation manifests: -+ -[source,terminal] ----- -$ openshift-install create manifests ----- - -. Copy the manifests that `ccoctl` generated to the manifests directory that the installation program created: -+ -[source,terminal,subs="+quotes"] ----- -$ cp //manifests/* ./manifests/ ----- - -. Copy the `tls` directory containing the private key that the `ccoctl` generated to the installation directory: -+ -[source,terminal,subs="+quotes"] ----- -$ cp -a //tls . ----- - -. Run the {product-title} installer: -+ -[source,terminal] ----- -$ ./openshift-install create cluster ----- diff --git a/modules/understanding-installation.adoc b/modules/understanding-installation.adoc deleted file mode 100644 index dbd19c82853d..000000000000 --- a/modules/understanding-installation.adoc +++ /dev/null @@ -1,8 +0,0 @@ -// Module included in the following assemblies: -// -// * TBD - -[id="understanding-installation_{context}"] -= Understanding {product-title} installation - -{product-title} installation is designed to quickly spin up an {product-title} cluster, with the user starting the cluster required to provide as little information as possible. 
diff --git a/modules/updating-troubleshooting-clear.adoc b/modules/updating-troubleshooting-clear.adoc deleted file mode 100644 index 8cbda3c63614..000000000000 --- a/modules/updating-troubleshooting-clear.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module included in the following assemblies: -// -// * updating/troubleshooting_updates/recovering-update-before-applied.adoc - -[id="updating-troubleshooting-clear_{context}"] -= Recovering when an update fails before it is applied - -If an update fails before it is applied, such as when the version that you specify cannot be found, you can cancel the update: - -[source,terminal] ----- -$ oc adm upgrade --clear ----- - -[IMPORTANT] -==== -If an update fails at any other point, you must contact Red Hat support. Rolling your cluster back to a previous version is not supported. -==== \ No newline at end of file diff --git a/modules/virt-early-access-releases.adoc b/modules/virt-early-access-releases.adoc deleted file mode 100644 index 86feca8f9eb6..000000000000 --- a/modules/virt-early-access-releases.adoc +++ /dev/null @@ -1,18 +0,0 @@ -// Module included in the following assemblies: -// -// * virt/updating/upgrading-virt.adoc - -:_mod-docs-content-type: CONCEPT -[id="virt-early-access-releases_{context}"] -= Early access releases - -You can gain access to builds in development by subscribing to the *candidate* update channel for your version of {VirtProductName}. These releases have not been fully tested by Red{nbsp}Hat and are not supported, but you can use them on non-production clusters to test capabilities and bug fixes being developed for that version. - -The *stable* channel, which matches the underlying {product-title} version and is fully tested, is suitable for production systems. You can switch between the *stable* and *candidate* channels in Operator Hub. However, updating from a *candidate* channel release to a *stable* channel release is not tested by Red{nbsp}Hat. - -Some candidate releases are promoted to the *stable* channel. However, releases present only in *candidate* channels might not contain all features that will be made generally available (GA), and some features in candidate builds might be removed before GA. Additionally, candidate releases might not offer update paths to later GA releases. - -[IMPORTANT] -==== -The candidate channel is only suitable for testing purposes where destroying and recreating a cluster is acceptable. -==== \ No newline at end of file diff --git a/modules/virt-importing-vm-wizard.adoc b/modules/virt-importing-vm-wizard.adoc deleted file mode 100644 index a8b119dcc336..000000000000 --- a/modules/virt-importing-vm-wizard.adoc +++ /dev/null @@ -1,150 +0,0 @@ -// Module included in the following assemblies: -// -// * virt/virtual_machines/importing_vms/virt-importing-vmware-vm.adoc -// * virt/virtual_machines/importing_vms/virt-importing-rhv-vm.adoc - -[id="virt-importing-vm-wizard_{context}"] -= Importing a virtual machine with the VM Import wizard - -You can import a single virtual machine with the VM Import wizard. - -ifdef::virt-importing-vmware-vm[] -You can also import a VM template. If you import a VM template, {VirtProductName} creates a virtual machine based on the template. - -.Prerequisites - -* You must have admin user privileges. -* The VMware Virtual Disk Development Kit (VDDK) image must be in an image registry that is accessible to your {VirtProductName} environment. -* The VDDK image must be added to the `spec.vddkInitImage` field of the `HyperConverged` custom resource (CR). 
-* The VM must be powered off. -* Virtual disks must be connected to IDE or SCSI controllers. If virtual disks are connected to a SATA controller, you can change them to IDE controllers and then migrate the VM. -* The {VirtProductName} local and shared persistent storage classes must support VM import. -* The {VirtProductName} storage must be large enough to accommodate the virtual disk. -+ -[WARNING] -==== -If you are using Ceph RBD block-mode volumes, the storage must be large enough to accommodate the virtual disk. If the disk is too large for the available storage, the import process fails and the PV that is used to copy the virtual disk is not released. You will not be able to import another virtual machine or to clean up the storage because there are insufficient resources to support object deletion. To resolve this situation, you must add more object storage devices to the storage back end. -==== - -* The {VirtProductName} egress network policy must allow the following traffic: -+ -[cols="1,1,1" options="header"] -|=== -|Destination |Protocol |Port -|VMware ESXi hosts |TCP |443 -|VMware ESXi hosts |TCP |902 -|VMware vCenter |TCP |5840 -|=== -endif::[] - -.Procedure - -. In the web console, click *Workloads* -> *Virtual Machines*. -. Click *Create Virtual Machine* and select *Import with Wizard*. -ifdef::virt-importing-vmware-vm[] -. Select *VMware* from the *Provider* list. -. Select *Connect to New Instance* or a saved vCenter instance. - -* If you select *Connect to New Instance*, enter the *vCenter hostname*, *Username*, and *Password*. -* If you select a saved vCenter instance, the wizard connects to the vCenter instance using the saved credentials. - -. Click *Check and Save* and wait for the connection to complete. -+ -[NOTE] -==== -The connection details are stored in a secret. If you add a provider with an incorrect hostname, user name, or password, click *Workloads* -> *Secrets* and delete the provider secret. -==== - -. Select a virtual machine or a template. -endif::[] -ifdef::virt-importing-rhv-vm[] -. Select *Red Hat Virtualization (RHV)* from the *Provider* list. -. Select *Connect to New Instance* or a saved RHV instance. - -* If you select *Connect to New Instance*, fill in the following fields: - -** *API URL*: For example, `\https:///ovirt-engine/api` -** *CA certificate*: Click *Browse* to upload the RHV Manager CA certificate or paste the CA certificate into the field. -+ -View the CA certificate by running the following command: -+ -[source,terminal] ----- -$ openssl s_client -connect :443 -showcerts < /dev/null ----- -+ -The CA certificate is the second certificate in the output. - -** *Username*: RHV Manager user name, for example, `ocpadmin@internal` -** *Password*: RHV Manager password - -* If you select a saved RHV instance, the wizard connects to the RHV instance using the saved credentials. - -. Click *Check and Save* and wait for the connection to complete. -+ -[NOTE] -==== -The connection details are stored in a secret. If you add a provider with an incorrect URL, user name, or password, click *Workloads* -> *Secrets* and delete the provider secret. -==== - -. Select a cluster and a virtual machine. -endif::[] -. Click *Next*. -. In the *Review* screen, review your settings. -// RHV import options -ifdef::virt-importing-rhv-vm[] -. Optional: You can select *Start virtual machine on creation*. -endif::[] - -. Click *Edit* to update the following settings: - -ifdef::virt-importing-rhv-vm[] -* *General* -> *Name*: The VM name is limited to 63 characters. 
-* *General* -> *Description*: Optional description of the VM. -** *Storage Class*: Select *NFS* or *ocs-storagecluster-ceph-rbd*. -+ -If you select *ocs-storagecluster-ceph-rbd*, you must set the *Volume Mode* of the disk to *Block*. - -** *Advanced* -> *Volume Mode*: Select *Block*. -* *Advanced* -> *Volume Mode*: Select *Block*. -* *Networking* -> *Network*: You can select a network from a list of available network attachment definition objects. -endif::[] -ifdef::virt-importing-vmware-vm[] -* *General*: -** *Description* -** *Operating System* -** *Flavor* -** *Memory* -** *CPUs* -** *Workload Profile* - -* *Networking*: -** *Name* -** *Model* -** *Network* -** *Type* -** *MAC Address* - -* *Storage*: Click the Options menu {kebab} of the VM disk and select *Edit* to update the following fields: -** *Name* -** *Source*: For example, *Import Disk*. -** *Size* -** *Interface* -** *Storage Class*: Select *NFS* or *ocs-storagecluster-ceph-rbd (ceph-rbd)*. -+ -If you select *ocs-storagecluster-ceph-rbd*, you must set the *Volume Mode* of the disk to *Block*. -+ -Other storage classes might work, but they are not officially supported. - -** *Advanced* -> *Volume Mode*: Select *Block*. -** *Advanced* -> *Access Mode* - -* *Advanced* -> *Cloud-init*: -** *Form*: Enter the *Hostname* and *Authenticated SSH Keys*. -** *Custom script*: Enter the `cloud-init` script in the text field. - -* *Advanced* -> *Virtual Hardware*: You can attach a virtual CD-ROM to the imported virtual machine. -endif::[] -. Click *Import* or *Review and Import*, if you have edited the import settings. -+ -A *Successfully created virtual machine* message and a list of resources created for the virtual machine are displayed. The virtual machine appears in *Workloads* -> *Virtual Machines*.
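If you prefer to confirm the import from the command line, you can list the virtual machine resources in the project that you imported into. This is a minimal sketch; it assumes that the `vm` and `vmi` short names for the KubeVirt `VirtualMachine` and `VirtualMachineInstance` resources are available, and `<project>` is a placeholder for your project name.

[source,terminal]
----
$ oc get vm,vmi -n <project>
----

The imported virtual machine appears in the `vm` list, and a corresponding `vmi` entry appears after the virtual machine is started.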