diff --git a/modules/compliance-imagestreams.adoc b/modules/compliance-imagestreams.adoc
deleted file mode 100644
index 80e9085635a6..000000000000
--- a/modules/compliance-imagestreams.adoc
+++ /dev/null
@@ -1,70 +0,0 @@
-// Module included in the following assemblies:
-//
-// * security/compliance_operator/co-management/compliance-operator-manage.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="compliance-imagestreams_{context}"]
-= Using image streams
-
-The `contentImage` reference points to a valid `ImageStreamTag`, and the Compliance Operator ensures that the content stays up to date automatically.
-
-[NOTE]
-====
-`ProfileBundle` objects also accept `ImageStream` references.
-====
-
-.Example image stream
-[source,terminal]
-----
-$ oc get is -n openshift-compliance
-----
-
-.Example output
-[source,terminal]
-----
-NAME               IMAGE REPOSITORY                                                                          TAGS     UPDATED
-openscap-ocp4-ds   image-registry.openshift-image-registry.svc:5000/openshift-compliance/openscap-ocp4-ds    latest   32 seconds ago
-----
-
-.Procedure
-. Ensure that the lookup policy is set to local:
-+
-[source,terminal]
-----
-$ oc patch is openscap-ocp4-ds \
-    -p '{"spec":{"lookupPolicy":{"local":true}}}' \
-    --type=merge \
-    -n openshift-compliance
-----
-+
-.Example output
-[source,terminal]
-----
-imagestream.image.openshift.io/openscap-ocp4-ds patched
-----
-
-. Use the name of the `ImageStreamTag` for the `ProfileBundle` by retrieving the `istag` name:
-+
-[source,terminal]
-----
-$ oc get istag -n openshift-compliance
-----
-+
-.Example output
-[source,terminal]
-----
-NAME                      IMAGE REFERENCE                                                                                                                                                 UPDATED
-openscap-ocp4-ds:latest   image-registry.openshift-image-registry.svc:5000/openshift-compliance/openscap-ocp4-ds@sha256:46d7ca9b7055fe56ade818ec3e62882cfcc2d27b9bf0d1cbae9f4b6df2710c96   3 minutes ago
-----
-
-. Create the `ProfileBundle`:
-+
-[source,terminal]
-----
-$ cat << EOF | oc create -f -
-apiVersion: compliance.openshift.io/v1alpha1
-kind: ProfileBundle
-metadata:
-  name: mybundle
-spec:
-  contentImage: openscap-ocp4-ds:latest
-  contentFile: ssg-rhcos4-ds.xml
-EOF
-----
-
-This `ProfileBundle` tracks the image. Any changes that are applied to it, such as updating the tag to point to a different hash, are immediately reflected in the `ProfileBundle`.
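-
-For example, you can simulate a content update by retagging the image stream so that `latest` points to different content. This is a minimal sketch; `<new_content_pullspec>` is a hypothetical placeholder for a valid datastream image pull spec:
-
-[source,terminal]
-----
-$ oc tag <new_content_pullspec> openscap-ocp4-ds:latest -n openshift-compliance
-----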
diff --git a/modules/nbde-installing-nbde-with-ztp.adoc b/modules/nbde-installing-nbde-with-ztp.adoc
deleted file mode 100644
index 45c1e4898222..000000000000
--- a/modules/nbde-installing-nbde-with-ztp.adoc
+++ /dev/null
@@ -1,38 +0,0 @@
-// Module included in the following assemblies:
-//
-// security/nbde-implementation-guide.adoc
-
-[id="nbde-installing-nbde-with-ztp_{context}"]
-= Installing NBDE with {ztp}
-
-{ztp-first} provides the capability to install Network-Bound Disk Encryption (NBDE) and enable disk encryption at cluster installation through SiteConfig. Use the automated SiteConfig method when you are enabling disk encryption on multiple managed clusters.
-
-You can specify disk encryption with a list of Tang server URLs and associated thumbprints in the site plan that contains the configuration for the site installation. The site plan generates the corresponding Ignition manifest along with the other day-0 manifests and applies them to the hub cluster.
-
-.Example `SiteConfig` custom resource (CR) containing a disk encryption specification
-[source,yaml]
-----
-apiVersion: ran.openshift.io/v1
-kind: SiteConfig
-metadata:
-  name: "site-plan-sno-du-ex"
-  namespace: "clusters-sub"
-spec:
-  baseDomain: "example.com"
-  ...
-  clusters:
-  - clusterName: "du-sno-ex"
-    clusterType: sno
-    clusterProfile: du
-    ...
-    diskEncryption:
-      type: "nbde"
-      tang:
-      - url: "http://10.0.0.1:7500"
-        thumbprint: "1c3wJKh6TQKTghTjWgS4MlIXtGk"
-      - url: "http://10.0.0.2:7500"
-        thumbprint: "WOjQYkyK7DxY_T5pMncMO5w0f6E"
-    ...
-    nodes:
-    - hostName: "host.domain.example.com"
-----
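-
-To obtain the thumbprint for each Tang server, you can run the `tang-show-keys` utility from the `tang` package on that server. A minimal sketch, assuming Tang listens on port `7500`; the output mirrors the sample thumbprint above:
-
-[source,terminal]
-----
-$ tang-show-keys 7500
-----
-
-.Example output
-[source,terminal]
-----
-1c3wJKh6TQKTghTjWgS4MlIXtGk
-----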
diff --git a/modules/network-observability-roles-create.adoc b/modules/network-observability-roles-create.adoc
deleted file mode 100644
index c47d254be4c7..000000000000
--- a/modules/network-observability-roles-create.adoc
+++ /dev/null
@@ -1,46 +0,0 @@
-// Module included in the following assemblies:
-
-// * networking/network_observability/installing-operators.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="network-observability-roles-create_{context}"]
-= Creating roles for authentication and authorization
-
-Specify authentication and authorization configurations by defining `ClusterRole` and `ClusterRoleBinding` resources. You can create a YAML file to define these roles.
-
-.Procedure
-
-. Using the web console, click the Import icon, *+*.
-. Drop your YAML file into the editor and click *Create*:
-+
-[source,yaml]
-----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRole
-metadata:
-  name: loki-netobserv-tenant
-rules:
-- apiGroups:
-  - 'loki.grafana.com'
-  resources:
-  - network
-  resourceNames:
-  - logs
-  verbs:
-  - 'get'
-  - 'create'
----
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
-metadata:
-  name: loki-netobserv-tenant
-roleRef:
-  apiGroup: rbac.authorization.k8s.io
-  kind: ClusterRole
-  name: loki-netobserv-tenant
-subjects:
-- kind: ServiceAccount
-  name: flowlogs-pipeline <1>
-  namespace: netobserv
-----
-<1> The `flowlogs-pipeline` writes to Loki. If you are using Kafka, this value is `flowlogs-pipeline-transformer`.
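-
-After the roles are created, you can confirm that both resources exist. This check is illustrative:
-
-[source,terminal]
-----
-$ oc get clusterrole/loki-netobserv-tenant clusterrolebinding/loki-netobserv-tenant
-----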
diff --git a/modules/networking-osp-enabling-metadata.adoc b/modules/networking-osp-enabling-metadata.adoc
deleted file mode 100644
index 054228b149cf..000000000000
--- a/modules/networking-osp-enabling-metadata.adoc
+++ /dev/null
@@ -1,66 +0,0 @@
-// Module included in the following assemblies:
-//
-// * installing/installing_openstack/installing-openstack-user.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="networking-osp-enabling-metadata_{context}"]
-= Enabling the {rh-openstack} metadata service as a mountable drive
-
-You can apply a machine config to your machine pool that makes the {rh-openstack-first} metadata service available as a mountable drive.
-
-[NOTE]
-====
-The following machine config enables the display of {rh-openstack} network UUIDs from within the SR-IOV Network Operator. This configuration simplifies the association of {rh-openstack} network resources to cluster SR-IOV resources.
-====
-
-.Procedure
-
-. Create a machine config file from the following template:
-+
-.A mountable metadata service machine config file
-[source,yaml]
-----
-kind: MachineConfig
-apiVersion: machineconfiguration.openshift.io/v1
-metadata:
-  name: 20-mount-config <1>
-  labels:
-    machineconfiguration.openshift.io/role: worker
-spec:
-  config:
-    ignition:
-      version: 3.2.0
-    systemd:
-      units:
-      - name: create-mountpoint-var-config.service
-        enabled: true
-        contents: |
-          [Unit]
-          Description=Create mountpoint /var/config
-          Before=kubelet.service
-
-          [Service]
-          ExecStart=/bin/mkdir -p /var/config
-
-          [Install]
-          WantedBy=var-config.mount
-
-      - name: var-config.mount
-        enabled: true
-        contents: |
-          [Unit]
-          Before=local-fs.target
-          [Mount]
-          Where=/var/config
-          What=/dev/disk/by-label/config-2
-          [Install]
-          WantedBy=local-fs.target
-----
-<1> You can substitute a name of your choice.
-
-. From a command line, apply the machine config:
-+
-[source,terminal]
-----
-$ oc apply -f <machine-config-file>.yaml
-----
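-
-After the machine config is applied and the affected nodes reboot, the metadata drive is mounted at `/var/config`. Assuming the drive follows the standard {rh-openstack} config-drive layout, you can then read metadata, such as network information, directly on a node:
-
-[source,terminal]
-----
-$ cat /var/config/openstack/latest/network_data.json
-----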
-==== - -.Sample `Custom` profile -[source,yaml] ----- -spec: - tlsSecurityProfile: - type: Custom - custom: - ciphers: - - ECDHE-ECDSA-AES128-GCM-SHA256 - - ECDHE-RSA-AES128-GCM-SHA256 - minTLSVersion: VersionTLS11 ----- diff --git a/modules/nw-ingress-select-route.adoc b/modules/nw-ingress-select-route.adoc deleted file mode 100644 index 8e4fa812d6e8..000000000000 --- a/modules/nw-ingress-select-route.adoc +++ /dev/null @@ -1,8 +0,0 @@ -// Module included in the following assemblies: -// -// * ingress/configure-ingress.adoc - -[id="nw-ingress-select-route_{context}"] -= Configure Ingress to use routes - -//PLACEHOLDER diff --git a/modules/nw-ipfailover-configuring-keepalived-multicast.adoc b/modules/nw-ipfailover-configuring-keepalived-multicast.adoc deleted file mode 100644 index 6b92de72cbbb..000000000000 --- a/modules/nw-ipfailover-configuring-keepalived-multicast.adoc +++ /dev/null @@ -1,49 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/configuring-ipfailover.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-ipfailover-configuring-keepalived-multicast_{context}"] -= Configuring Keepalived multicast - -{product-title} IP failover internally uses Keepalived. - -[IMPORTANT] -==== -Ensure that multicast is enabled on the nodes labeled above and they can accept network traffic for `224.0.0.18`, the Virtual Router Redundancy Protocol (VRRP) multicast IP address. -==== - -Before starting the Keepalived daemon, the startup script verifies the `iptables` rule that allows multicast traffic to flow. If there is no such rule, the startup script creates a new rule and adds it to the IP tables configuration. Where this new rule gets added to the IP tables configuration depends on the OPENSHIFT_HA_IPTABLES_CHAIN` variable. If there is an `OPENSHIFT_HA_IPTABLES_CHAIN` variable specified, the rule gets added to the specified chain. Otherwise, the rule is added to the `INPUT` chain. - -[IMPORTANT] -==== -The `iptables` rule must be present whenever there is one or more Keepalived daemon running on the node. -==== - -The `iptables` rule can be removed after the last Keepalived daemon terminates. The rule is not automatically removed. - -.Procedure - -* The `iptables` rule only gets created when it is not already present and the `OPENSHIFT_HA_IPTABLES_CHAIN` variable is specified. You can manually manage the `iptables` rule on each of the nodes if you unset the `OPENSHIFT_HA_IPTABLES_CHAIN` variable: -+ -[IMPORTANT] -==== -You must ensure that the manually added rules persist after a system restart. - -Be careful since every Keepalived daemon uses the VRRP protocol over multicast `224.0.0.18` to negotiate with its peers. There must be a different `VRRP-id`, in the range `0..255`, for each VIP. -==== -+ -[source,terminal] ----- -$ for node in openshift-node-{5,6,7,8,9}; do ssh $node < go-controller/pkg/metrics/master.go - -.Metrics exposed by OVN-Kubernetes -[cols="2a,8a",options="header"] -|=== -|Name |Description - -|`ovnkube_master_pod_creation_latency_seconds` -|The latency between when a pod is created and when the pod is annotated by OVN-Kubernetes. The higher the latency, the more time that elapses before a pod is available for network connectivity. - -|=== - -//// -|`ovnkube_master_nb_e2e_timestamp` -|A timestamp persisted to the OVN (Open Virtual Network) northbound database and updated frequently. 
-//// diff --git a/modules/nw-ovn-kubernetes-resources-con.adoc b/modules/nw-ovn-kubernetes-resources-con.adoc deleted file mode 100644 index f1d1b4965c15..000000000000 --- a/modules/nw-ovn-kubernetes-resources-con.adoc +++ /dev/null @@ -1,42 +0,0 @@ -// Module included in the following assemblies: -// -// * networking/ovn_kubernetes_network_provider/ovn-kubernetes-architecture.adoc - -:_mod-docs-content-type: PROCEDURE -[id="nw-kubernetes-resources-con_{context}"] -= Resources in the OVN-Kubernetes project - -The OVN-Kubernetes Container Network Interface (CNI) cluster network provider - -.Procedure - -. Run the following command to get all resources in the OVN-Kubernetes project -+ -[source,terminal] ----- -$ oc get all -n openshift-ovn-kubernetes ----- - -.Example output -[source,terminal] ----- -$ NAME READY STATUS RESTARTS AGE -pod/ovnkube-master-cpdxx 6/6 Running 0 157m -pod/ovnkube-master-kcbb5 6/6 Running 0 157m -pod/ovnkube-master-lqhsf 6/6 Running 0 157m -pod/ovnkube-node-2gj7j 5/5 Running 0 147m -pod/ovnkube-node-4kjhv 0/5 ContainerCreating 0 35s -pod/ovnkube-node-f567p 5/5 Running 0 157m -pod/ovnkube-node-lvswl 5/5 Running 0 157m -pod/ovnkube-node-z5dfx 5/5 Running 0 157m -pod/ovnkube-node-zpsn4 5/5 Running 0 134m - -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -service/ovn-kubernetes-master ClusterIP None 9102/TCP 157m -service/ovn-kubernetes-node ClusterIP None 9103/TCP,9105/TCP 157m -service/ovnkube-db ClusterIP None 9641/TCP,9642/TCP 157m - -NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE -daemonset.apps/ovnkube-master 3 3 3 3 3 beta.kubernetes.io/os=linux,node-role.kubernetes.io/master= 157m -daemonset.apps/ovnkube-node 6 6 5 6 5 beta.kubernetes.io/os=linux 157m ----- diff --git a/modules/nw-sriov-add-pod-runtimeconfig.adoc b/modules/nw-sriov-add-pod-runtimeconfig.adoc deleted file mode 100644 index d84d49d91004..000000000000 --- a/modules/nw-sriov-add-pod-runtimeconfig.adoc +++ /dev/null @@ -1,104 +0,0 @@ -// Module included in the following assemblies: -// -// * virt/node_network/virt-configuring-sr-iov-network.adoc -// * virt/vm_networking/virt-connecting-vm-to-sriov.adoc - -// Deprecating in OCP; This is identical in practice to adding a pod -// to an additional network. - -:_mod-docs-content-type: PROCEDURE -[id="nw-sriov-add-pod-runtimeconfig_{context}"] -= Configuring static MAC and IP addresses on additional SR-IOV networks - -You can configure static MAC and IP addresses on an SR-IOV network by specifying Container Network Interface (CNI) `runtimeConfig` data in a pod annotation. - -.Prerequisites - -* Install the OpenShift CLI (`oc`). -* Log in as a user with `cluster-admin` privileges when creating the `SriovNetwork` object. - -.Procedure - -. Create the following `SriovNetwork` object, and then save the YAML in the `-sriov-network.yaml` file. Replace `` with a name for this additional network. -+ -[source,yaml] ----- -apiVersion: sriovnetwork.openshift.io/v1 -kind: SriovNetwork -metadata: - name: <1> - namespace: openshift-sriov-network-operator <2> -spec: - networkNamespace: <3> - ipam: '{ "type": "static" }' <4> - capabilities: '{ "mac": true, "ips": true }' <5> - resourceName: <6> ----- -<1> Replace `` with a name for the object. The SR-IOV Network Operator creates a `NetworkAttachmentDefinition` object with same name. -<2> Specify the namespace where the SR-IOV Network Operator is installed. -<3> Replace `` with the namespace where the `NetworkAttachmentDefinition` object is created. 
diff --git a/modules/nw-ingress-select-route.adoc b/modules/nw-ingress-select-route.adoc
deleted file mode 100644
index 8e4fa812d6e8..000000000000
--- a/modules/nw-ingress-select-route.adoc
+++ /dev/null
@@ -1,8 +0,0 @@
-// Module included in the following assemblies:
-//
-// * ingress/configure-ingress.adoc
-
-[id="nw-ingress-select-route_{context}"]
-= Configuring Ingress to use routes
-
-//PLACEHOLDER
diff --git a/modules/nw-ipfailover-configuring-keepalived-multicast.adoc b/modules/nw-ipfailover-configuring-keepalived-multicast.adoc
deleted file mode 100644
index 6b92de72cbbb..000000000000
--- a/modules/nw-ipfailover-configuring-keepalived-multicast.adoc
+++ /dev/null
@@ -1,49 +0,0 @@
-// Module included in the following assemblies:
-//
-// * networking/configuring-ipfailover.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="nw-ipfailover-configuring-keepalived-multicast_{context}"]
-= Configuring Keepalived multicast
-
-{product-title} IP failover internally uses Keepalived.
-
-[IMPORTANT]
-====
-Ensure that multicast is enabled on the nodes that you labeled for IP failover and that they can accept network traffic for `224.0.0.18`, the Virtual Router Redundancy Protocol (VRRP) multicast IP address.
-====
-
-Before starting the Keepalived daemon, the startup script verifies the `iptables` rule that allows multicast traffic to flow. If there is no such rule, the startup script creates a new rule and adds it to the IP tables configuration. Where this new rule gets added to the IP tables configuration depends on the `OPENSHIFT_HA_IPTABLES_CHAIN` variable. If the `OPENSHIFT_HA_IPTABLES_CHAIN` variable is specified, the rule gets added to the specified chain. Otherwise, the rule is added to the `INPUT` chain.
-
-[IMPORTANT]
-====
-The `iptables` rule must be present whenever one or more Keepalived daemons are running on the node.
-====
-
-The `iptables` rule can be removed after the last Keepalived daemon terminates. The rule is not automatically removed.
-
-.Procedure
-
-* The `iptables` rule is only created when it is not already present and the `OPENSHIFT_HA_IPTABLES_CHAIN` variable is specified. You can manually manage the `iptables` rule on each of the nodes if you unset the `OPENSHIFT_HA_IPTABLES_CHAIN` variable:
-+
-[IMPORTANT]
-====
-You must ensure that the manually added rules persist after a system restart.
-
-Be careful, because every Keepalived daemon uses the VRRP protocol over multicast `224.0.0.18` to negotiate with its peers. There must be a different `VRRP-id`, in the range `0..255`, for each VIP.
-====
-+
-[source,terminal]
-----
-$ for node in openshift-node-{5,6,7,8,9}; do ssh $node <<EOF
-
-export interface=${interface:-"tun0"}
-echo "Check multicast enabled ... ";
-ip addr show $interface | grep MULTICAST
-
-echo "Check multicast groups ... "
-ip maddr show $interface | grep 224.0.0.18
-EOF
-done;
-----
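-
-If you manage the rule manually, the following is a minimal sketch of an accept rule for VRRP multicast traffic, run as the root user on each labeled node. The `INPUT` chain is an assumption; use the chain that your `OPENSHIFT_HA_IPTABLES_CHAIN` variable names:
-
-[source,terminal]
-----
-$ iptables -I INPUT -d 224.0.0.18/32 -p vrrp -j ACCEPT
-----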
diff --git a/modules/nw-ovn-kubernetes-metrics.adoc b/modules/nw-ovn-kubernetes-metrics.adoc
deleted file mode 100644
--- a/modules/nw-ovn-kubernetes-metrics.adoc
+++ /dev/null
-[id="nw-ovn-kubernetes-metrics_{context}"]
-= OVN-Kubernetes metrics
-
-// go-controller/pkg/metrics/master.go
-
-.Metrics exposed by OVN-Kubernetes
-[cols="2a,8a",options="header"]
-|===
-|Name |Description
-
-|`ovnkube_master_pod_creation_latency_seconds`
-|The latency between when a pod is created and when the pod is annotated by OVN-Kubernetes. The higher the latency, the more time that elapses before a pod is available for network connectivity.
-
-|===
-
-////
-|`ovnkube_master_nb_e2e_timestamp`
-|A timestamp persisted to the OVN (Open Virtual Network) northbound database and updated frequently.
-////
diff --git a/modules/nw-ovn-kubernetes-resources-con.adoc b/modules/nw-ovn-kubernetes-resources-con.adoc
deleted file mode 100644
index f1d1b4965c15..000000000000
--- a/modules/nw-ovn-kubernetes-resources-con.adoc
+++ /dev/null
@@ -1,42 +0,0 @@
-// Module included in the following assemblies:
-//
-// * networking/ovn_kubernetes_network_provider/ovn-kubernetes-architecture.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="nw-kubernetes-resources-con_{context}"]
-= Resources in the OVN-Kubernetes project
-
-The OVN-Kubernetes Container Network Interface (CNI) cluster network provider deploys its resources in the `openshift-ovn-kubernetes` project.
-
-.Procedure
-
-. Run the following command to get all resources in the OVN-Kubernetes project:
-+
-[source,terminal]
-----
-$ oc get all -n openshift-ovn-kubernetes
-----
-
-.Example output
-[source,terminal]
-----
-NAME                       READY   STATUS              RESTARTS   AGE
-pod/ovnkube-master-cpdxx   6/6     Running             0          157m
-pod/ovnkube-master-kcbb5   6/6     Running             0          157m
-pod/ovnkube-master-lqhsf   6/6     Running             0          157m
-pod/ovnkube-node-2gj7j     5/5     Running             0          147m
-pod/ovnkube-node-4kjhv     0/5     ContainerCreating   0          35s
-pod/ovnkube-node-f567p     5/5     Running             0          157m
-pod/ovnkube-node-lvswl     5/5     Running             0          157m
-pod/ovnkube-node-z5dfx     5/5     Running             0          157m
-pod/ovnkube-node-zpsn4     5/5     Running             0          134m
-
-NAME                            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
-service/ovn-kubernetes-master   ClusterIP   None         <none>        9102/TCP            157m
-service/ovn-kubernetes-node     ClusterIP   None         <none>        9103/TCP,9105/TCP   157m
-service/ovnkube-db              ClusterIP   None         <none>        9641/TCP,9642/TCP   157m
-
-NAME                            DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                 AGE
-daemonset.apps/ovnkube-master   3         3         3       3            3           beta.kubernetes.io/os=linux,node-role.kubernetes.io/master=   157m
-daemonset.apps/ovnkube-node     6         6         5       6            5           beta.kubernetes.io/os=linux                                   157m
-----
diff --git a/modules/nw-sriov-add-pod-runtimeconfig.adoc b/modules/nw-sriov-add-pod-runtimeconfig.adoc
deleted file mode 100644
index d84d49d91004..000000000000
--- a/modules/nw-sriov-add-pod-runtimeconfig.adoc
+++ /dev/null
@@ -1,104 +0,0 @@
-// Module included in the following assemblies:
-//
-// * virt/node_network/virt-configuring-sr-iov-network.adoc
-// * virt/vm_networking/virt-connecting-vm-to-sriov.adoc
-
-// Deprecating in OCP; This is identical in practice to adding a pod
-// to an additional network.
-
-:_mod-docs-content-type: PROCEDURE
-[id="nw-sriov-add-pod-runtimeconfig_{context}"]
-= Configuring static MAC and IP addresses on additional SR-IOV networks
-
-You can configure static MAC and IP addresses on an SR-IOV network by specifying Container Network Interface (CNI) `runtimeConfig` data in a pod annotation.
-
-.Prerequisites
-
-* Install the OpenShift CLI (`oc`).
-* Log in as a user with `cluster-admin` privileges when creating the `SriovNetwork` object.
-
-.Procedure
-
-. Create the following `SriovNetwork` object, and then save the YAML in the `<name>-sriov-network.yaml` file. Replace `<name>` with a name for this additional network.
-+
-[source,yaml]
-----
-apiVersion: sriovnetwork.openshift.io/v1
-kind: SriovNetwork
-metadata:
-  name: <name> <1>
-  namespace: openshift-sriov-network-operator <2>
-spec:
-  networkNamespace: <target_namespace> <3>
-  ipam: '{ "type": "static" }' <4>
-  capabilities: '{ "mac": true, "ips": true }' <5>
-  resourceName: <sriov_resource_name> <6>
-----
-<1> Replace `<name>` with a name for the object. The SR-IOV Network Operator creates a `NetworkAttachmentDefinition` object with the same name.
-<2> Specify the namespace where the SR-IOV Network Operator is installed.
-<3> Replace `<target_namespace>` with the namespace where the `NetworkAttachmentDefinition` object is created.
-<4> Specify the `static` type for the `ipam` CNI plugin as a YAML block scalar.
-<5> Set the `mac` and `ips` capabilities to `true`.
-<6> Replace `<sriov_resource_name>` with the value of the `spec.resourceName` parameter from the `SriovNetworkNodePolicy` object that defines the SR-IOV hardware for this additional network.
-
-. Create the object by running the following command:
-+
-[source,terminal]
-----
-$ oc create -f <filename> <1>
-----
-<1> Replace `<filename>` with the name of the file you created in the previous step.
-
-. Optional: Confirm that the `NetworkAttachmentDefinition` CR associated with the `SriovNetwork` object that you created in the previous step exists by running the following command. Replace `<namespace>` with the namespace you specified in the `SriovNetwork` object.
-+
-[source,terminal]
-----
-$ oc get net-attach-def -n <namespace>
-----
-
-[NOTE]
-====
-Do not modify or delete a `SriovNetwork` custom resource (CR) if it is attached to any pods in the `running` state.
-====
-
-. Create the following SR-IOV pod spec, and then save the YAML in the `<name>-sriov-pod.yaml` file. Replace `<name>` with a name for this pod.
-+
-[source,yaml]
-----
-apiVersion: v1
-kind: Pod
-metadata:
-  name: sample-pod
-  annotations:
-    k8s.v1.cni.cncf.io/networks: '[
-      {
-        "name": "<name>", <1>
-        "mac": "20:04:0f:f1:88:01", <2>
-        "ips": ["192.168.10.1/24", "2001::1/64"] <3>
-      }
-    ]'
-spec:
-  containers:
-  - name: sample-container
-    image: <image>
-    imagePullPolicy: IfNotPresent
-    command: ["sleep", "infinity"]
-----
-<1> Specify the name of the SR-IOV network attachment definition CR.
-<2> Specify the MAC address for the SR-IOV device that is allocated from the resource type defined in the SR-IOV network attachment definition CR.
-<3> Specify the IP addresses for the SR-IOV device that are allocated from the resource type defined in the SR-IOV network attachment definition CR. Both IPv4 and IPv6 addresses are supported.
-
-. Create the sample SR-IOV pod by running the following command:
-+
-[source,terminal]
-----
-$ oc create -f <filename> <1>
-----
-<1> Replace `<filename>` with the name of the file you created in the previous step.
-
-. Optional: Confirm that the `mac` and `ips` addresses are applied to the SR-IOV device by running the following command. Replace `<namespace>` with the namespace you specified in the `SriovNetwork` object.
-+
-[source,terminal]
-----
-$ oc exec sample-pod -n <namespace> -- ip addr show
-----
diff --git a/modules/nw-using-service-external-ip.adoc b/modules/nw-using-service-external-ip.adoc
deleted file mode 100644
index ad18f1bf02d3..000000000000
--- a/modules/nw-using-service-external-ip.adoc
+++ /dev/null
@@ -1,22 +0,0 @@
-// Module included in the following assemblies:
-//
-// * networking/configuring_ingress_cluster_traffic/configuring-ingress-cluster-traffic-service-external-ip.adoc
-
-[id="nw-service-external-ip_{context}"]
-= Using a service external IP to get traffic into the cluster
-
-One method to expose a service is to assign an external IP address directly to the service that you want to make accessible from outside the cluster.
-
-The external IP address that you use must be provisioned on your infrastructure platform and attached to a cluster node.
-
-With an external IP on the service, {product-title} sets up NAT rules to allow traffic that arrives at any cluster node attached to that IP address to be sent to one of the internal pods. This is similar to the internal service IP addresses, but the external IP tells {product-title} that this service should also be exposed externally at the given IP. The administrator must assign the IP address to a host (node) interface on one of the nodes in the cluster. Alternatively, the address can be used as a virtual IP (VIP).
-
-These IP addresses are not managed by {product-title}. The cluster administrator is responsible for ensuring that traffic arrives at a node with this IP address.
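-
-The following is a minimal sketch of a service that uses an external IP. The selector, the port, and the `192.0.2.10` address are illustrative assumptions; the address must be provisioned on your infrastructure platform and routed to a cluster node:
-
-[source,yaml]
-----
-apiVersion: v1
-kind: Service
-metadata:
-  name: example-external-ip
-spec:
-  selector:
-    app: example
-  ports:
-  - port: 80
-    protocol: TCP
-  externalIPs:
-  - 192.0.2.10
-----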
diff --git a/modules/pruning-images-job-cronjob.adoc b/modules/pruning-images-job-cronjob.adoc
deleted file mode 100644
index 7e89fc5a4f03..000000000000
--- a/modules/pruning-images-job-cronjob.adoc
+++ /dev/null
@@ -1,172 +0,0 @@
-// Module included in the following assemblies:
-//
-// * applications/pruning-objects.adoc
-
-:_mod-docs-content-type: PROCEDURE
-[id="pruning-images-job-cronjob_{context}"]
-= Running image pruning as a Job or CronJob
-
-You can configure image pruning to run on a schedule by creating a `Job` or `CronJob` that invokes the pruning operation on the cluster.
-This approach allows administrators to automate pruning at regular intervals without relying on the image pruning custom resource.
-
-.Prerequisites
-
-* To prune images, you must first log in to the CLI as a user with an access token. The user must also have the `system:image-pruner` cluster role or greater (for example, `cluster-admin`).
-* Expose the image registry.
-
-.Procedure
-
-To manually prune images that the system no longer requires because of their age or status, or because they exceed limits, use one of the following methods:
-
-* Run image pruning as a `Job` or `CronJob` on the cluster by creating a YAML file for the `pruner` service account, for example:
-+
-[source,terminal]
-----
-$ oc create -f <filename>.yaml
-----
-+
-.Example YAML file
-+
-[source,yaml]
-----
-kind: List
-apiVersion: v1
-items:
-- apiVersion: v1
-  kind: ServiceAccount
-  metadata:
-    name: pruner
-    namespace: openshift-image-registry
-- apiVersion: rbac.authorization.k8s.io/v1
-  kind: ClusterRoleBinding
-  metadata:
-    name: openshift-image-registry-pruner
-  roleRef:
-    apiGroup: rbac.authorization.k8s.io
-    kind: ClusterRole
-    name: system:image-pruner
-  subjects:
-  - kind: ServiceAccount
-    name: pruner
-    namespace: openshift-image-registry
-- apiVersion: batch/v1
-  kind: CronJob
-  metadata:
-    name: image-pruner
-    namespace: openshift-image-registry
-  spec:
-    schedule: "0 0 * * *"
-    concurrencyPolicy: Forbid
-    successfulJobsHistoryLimit: 1
-    failedJobsHistoryLimit: 3
-    jobTemplate:
-      spec:
-        template:
-          spec:
-            restartPolicy: OnFailure
-            containers:
-            - image: "quay.io/openshift/origin-cli:4.1"
-              resources:
-                requests:
-                  cpu: 1
-                  memory: 1Gi
-              terminationMessagePolicy: FallbackToLogsOnError
-              command:
-              - oc
-              args:
-              - adm
-              - prune
-              - images
-              - --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt
-              - --keep-tag-revisions=5
-              - --keep-younger-than=96h
-              - --confirm=true
-              name: image-pruner
-            serviceAccountName: pruner
-----
-
-* Run the `oc adm prune images [<options>]` command:
-+
-[source,terminal]
-----
-$ oc adm prune images [<options>]
-----
-+
-Pruning images removes data from the integrated registry unless `--prune-registry=false` is used.
-+
-Pruning images with the `--namespace` flag does not remove images, only image streams. Images are non-namespaced resources. Therefore, limiting pruning to a particular namespace makes it impossible to calculate its current usage.
-+
-By default, the integrated registry caches metadata of blobs to reduce the number of requests to storage and to increase the request-processing speed. Pruning does not update the integrated registry cache. Images that contain pruned layers are broken after pruning, because the cache still holds metadata for the pruned layers and the registry therefore does not request that those layers be pushed again. For this reason, you must redeploy the registry to clear the cache after pruning:
-+
-[source,terminal]
-----
-$ oc rollout restart deployment/image-registry -n openshift-image-registry
-----
-+
-If the integrated registry uses a Redis cache, you must clean the database manually.
-+
-If redeploying the registry after pruning is not an option, then you must permanently disable the cache.
-+
-`oc adm prune images` operations require a route for your registry. Registry routes are not created by default.
-+
-The *Prune images CLI configuration options* table describes the options you can use with the `oc adm prune images [<options>]` command.
-+
-.Prune images CLI configuration options
-[cols="4,8",options="header"]
-|===
-
-|Option |Description
-
-.^|`--all`
-|Include images that were not pushed to the registry, but have been mirrored by pullthrough. This is on by default. To limit the pruning to images that were pushed to the integrated registry, pass `--all=false`.
-
-.^|`--certificate-authority`
-|The path to a certificate authority file to use when communicating with the {product-title}-managed registries. Defaults to the certificate authority data from the current user's configuration file. If provided, a secure connection is initiated.
-
-.^|`--confirm`
-|Indicate that pruning should occur, instead of performing a test run. This requires a valid route to the integrated container image registry. If this command is run outside of the cluster network, the route must be provided using `--registry-url`.
-
-.^|`--force-insecure`
-|Use caution with this option. Allow an insecure connection to the container registry that is hosted over HTTP or has an invalid HTTPS certificate.
-
-.^|`--keep-tag-revisions=<N>`
-|For each image stream, keep at most `N` image revisions per tag (default `3`).
-
-.^|`--keep-younger-than=<duration>`
-|Do not prune any image that is younger than `<duration>` relative to the current time. Alternatively, do not prune any image that is referenced by any other object that is younger than `<duration>` relative to the current time (default `60m`).
-
-.^|`--prune-over-size-limit`
-|Prune each image that exceeds the smallest limit defined in the same project. This flag cannot be combined with `--keep-tag-revisions` or `--keep-younger-than`.
-
-.^|`--registry-url`
-|The address to use when contacting the registry. The command attempts to use a cluster-internal URL determined from managed images and image streams. If that fails because the registry cannot be resolved or reached, you must provide a working alternative route by using this flag. The registry hostname can be prefixed by `https://` or `http://`, which enforces a particular connection protocol.
-
-.^|`--prune-registry`
-|In conjunction with the conditions stipulated by the other options, this option controls whether the data in the registry corresponding to the {product-title} image API object is pruned. By default, image pruning processes both the image API objects and corresponding data in the registry.
-
-This option is useful when you are only concerned with removing etcd content to reduce the number of image objects, but are not concerned with cleaning up registry storage, or if you intend to do that separately by hard pruning the registry during an appropriate maintenance window for the registry.
-|===
-
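-Without `--confirm`, the command performs a test run and only reports what would be removed. For example, a dry run with illustrative retention settings:
-
-[source,terminal]
-----
-$ oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m
-----
-
-After you review the output, add `--confirm` to perform the pruning:
-
-[source,terminal]
-----
-$ oc adm prune images --keep-tag-revisions=3 --keep-younger-than=60m --confirm
-----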
-|=== - diff --git a/modules/running-cluster-loader.adoc b/modules/running-cluster-loader.adoc deleted file mode 100644 index ad2d4aac32ed..000000000000 --- a/modules/running-cluster-loader.adoc +++ /dev/null @@ -1,46 +0,0 @@ -// Module included in the following assemblies: -// -// scalability_and_performance/using-cluster-loader.adoc - -:_mod-docs-content-type: PROCEDURE -[id="running_cluster_loader_{context}"] -= Running Cluster Loader - -.Prerequisites - -* The repository will prompt you to authenticate. The registry credentials allow -you to access the image, which is not publicly available. Use your existing -authentication credentials from installation. - -.Procedure - -. Execute Cluster Loader using the built-in test configuration, which deploys five -template builds and waits for them to complete: -+ -[source,terminal] ----- -$ podman run -v ${LOCAL_KUBECONFIG}:/root/.kube/config:z -i \ -quay.io/openshift/origin-tests:4.9 /bin/bash -c 'export KUBECONFIG=/root/.kube/config && \ -openshift-tests run-test "[sig-scalability][Feature:Performance] Load cluster \ -should populate the cluster [Slow][Serial] [Suite:openshift]"' ----- -+ -Alternatively, execute Cluster Loader with a user-defined configuration by -setting the environment variable for `VIPERCONFIG`: -+ -[source,terminal] ----- -$ podman run -v ${LOCAL_KUBECONFIG}:/root/.kube/config:z \ --v ${LOCAL_CONFIG_FILE_PATH}:/root/configs/:z \ --i quay.io/openshift/origin-tests:4.9 \ -/bin/bash -c 'KUBECONFIG=/root/.kube/config VIPERCONFIG=/root/configs/test.yaml \ -openshift-tests run-test "[sig-scalability][Feature:Performance] Load cluster \ -should populate the cluster [Slow][Serial] [Suite:openshift]"' ----- -+ -In this example, `${LOCAL_KUBECONFIG}` refers to the path to the `kubeconfig` on -your local file system. Also, there is a directory called -`${LOCAL_CONFIG_FILE_PATH}`, which is mounted into the container that contains a -configuration file called `test.yaml`. Additionally, if the `test.yaml` -references any external template files or podspec files, they should also be -mounted into the container.