diff --git a/modules/virt-NUMA-prereqs.adoc b/modules/virt-NUMA-prereqs.adoc index b4370b3bcd3c..6ef5d78d5d3c 100644 --- a/modules/virt-NUMA-prereqs.adoc +++ b/modules/virt-NUMA-prereqs.adoc @@ -12,7 +12,8 @@ Before you can enable NUMA functionality with {VirtProductName} VMs, you must en * Worker nodes must have huge pages enabled. * The `KubeletConfig` object on worker nodes must be configured with the `cpuManagerPolicy: static` spec to guarantee dedicated CPU allocation, which is a prerequisite for NUMA pinning. + -.Example `cpuManagerPolicy: static` spec +Example `cpuManagerPolicy: static` spec: ++ [source,yaml] ---- apiVersion: machineconfiguration.openshift.io/v1 diff --git a/modules/virt-about-aaq-operator.adoc b/modules/virt-about-aaq-operator.adoc index 2c2eb4f74b58..e267e2ce580f 100644 --- a/modules/virt-about-aaq-operator.adoc +++ b/modules/virt-about-aaq-operator.adoc @@ -6,6 +6,7 @@ [id="virt-about-aaq-operator_{context}"] = About the AAQ Operator +[role="_abstract"] The Application-Aware Quota (AAQ) Operator provides more flexible and extensible quota management compared to the native `ResourceQuota` object in the {product-title} platform. In a multi-tenant cluster environment, where multiple workloads operate on shared infrastructure and resources, using the Kubernetes native `ResourceQuota` object to limit aggregate CPU and memory consumption presents infrastructure overhead and live migration challenges for {VirtProductName} workloads. @@ -21,7 +22,8 @@ The AAQ Operator introduces two new API objects defined as custom resource defin * `ApplicationAwareResourceQuota`: Sets aggregate quota restrictions enforced per namespace. The `ApplicationAwareResourceQuota` API is compatible with the native `ResourceQuota` object and shares the same specification and status definitions. 
+ -.Example manifest +Example manifest: ++ [source,yaml] ---- apiVersion: aaq.kubevirt.io/v1alpha1 @@ -41,7 +43,8 @@ spec: * `ApplicationAwareClusterResourceQuota`: Mirrors the `ApplicationAwareResourceQuota` object at a cluster scope. It is compatible with the native `ClusterResourceQuota` API object and shares the same specification and status definitions. When creating an AAQ cluster quota, you can select multiple namespaces based on annotation selection, label selection, or both by editing the `spec.selector.labels` or `spec.selector.annotations` fields. + -.Example manifest +Example manifest: ++ [source,yaml] ---- apiVersion: aaq.kubevirt.io/v1alpha1 diff --git a/modules/virt-about-application-consistent-backups.adoc b/modules/virt-about-application-consistent-backups.adoc index 4f0b9cf71ddf..114fa6bfa83f 100644 --- a/modules/virt-about-application-consistent-backups.adoc +++ b/modules/virt-about-application-consistent-backups.adoc @@ -6,8 +6,9 @@ [id="virt-about-application-consistent-backups_{context}"] = About application-consistent snapshots and backups -You can configure application-consistent snapshots and backups for Linux or Windows virtual machines (VMs) through a cycle of freezing and thawing. For any application, you can either configure a script on a Linux VM or register on a Windows VM to be notified when a snapshot or backup is due to begin. +[role="_abstract"] +You can configure application-consistent snapshots and backups for Linux or Windows virtual machines (VMs) through a cycle of freezing and thawing. For any application, you can configure a script on a Linux VM or register the application on a Windows VM to be notified when a snapshot or backup is due to begin. On a Linux VM, freeze and thaw processes trigger automatically when a snapshot is taken or a backup is started by using, for example, a plugin from Velero or another backup vendor. 
The freeze process, performed by QEMU Guest Agent (QEMU GA) freeze hooks, ensures that before the snapshot or backup of a VM occurs, all of the VM's filesystems are frozen and each appropriately configured application is informed that a snapshot or backup is about to start. This notification affords each application the opportunity to quiesce its state. Depending on the application, quiescing might involve temporarily refusing new requests, finishing in-progress operations, and flushing data to disk. The operating system is then directed to quiesce the filesystems by flushing outstanding writes to disk and freezing new write activity. All new connection requests are refused. When all applications have become inactive, the QEMU GA freezes the filesystems, and a snapshot is taken or a backup initiated. After the taking of the snapshot or start of the backup, the thawing process begins. Filesystems writing is reactivated and applications receive notification to resume normal operations. -The same cycle of freezing and thawing is available on a Windows VM. Applications register with the Volume Shadow Copy Service (VSS) to receive notifications that they should flush out their data because a backup or snapshot is imminent. Thawing of the applications after the backup or snapshot is complete returns them to an active state. For more details, see the Windows Server documentation about the Volume Shadow Copy Service. \ No newline at end of file +The same cycle of freezing and thawing is available on a Windows VM. Applications register with the Volume Shadow Copy Service (VSS) to receive notifications that they should flush out their data because a backup or snapshot is imminent. Thawing of the applications after the backup or snapshot is complete returns them to an active state. For more details, see the Windows Server documentation about the Volume Shadow Copy Service. 
diff --git a/modules/virt-about-auto-bootsource-updates.adoc b/modules/virt-about-auto-bootsource-updates.adoc index 948962070e28..5fc766da1164 100644 --- a/modules/virt-about-auto-bootsource-updates.adoc +++ b/modules/virt-about-auto-bootsource-updates.adoc @@ -7,12 +7,13 @@ [id="virt-about-auto-bootsource-updates_{context}"] = About automatic boot source updates -Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates the _system-defined_ boot sources that {VirtProductName} provides. +[role="_abstract"] +Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. -You can opt out of automatic updates for all system-defined boot sources by disabling the `enableCommonBootImageImport` feature gate. If you disable this feature gate, all `DataImportCron` objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually. +By default, CDI automatically updates the _system-defined_ boot sources that {VirtProductName} provides. You can opt out of automatic updates for all system-defined boot sources by disabling the `enableCommonBootImageImport` feature gate. If you disable this feature gate, all `DataImportCron` objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually. When the `enableCommonBootImageImport` feature gate is disabled, `DataSource` objects are reset so that they no longer point to the original boot source. 
An administrator can manually provide a boot source by populating a PVC with an operating system, optionally creating a volume snapshot from the PVC, and then referring to the PVC or volume snapshot from the `DataSource` object. _Custom_ boot sources that are not provided by {VirtProductName} are not controlled by the feature gate. You must manage them individually by editing the `HyperConverged` custom resource (CR). You can also use this method to manage individual system-defined boot sources. -Cluster administrators can enable automatic subscription for {op-system-base-full} virtual machines in the {product-title} web console. \ No newline at end of file +Cluster administrators can enable automatic subscription for {op-system-base-full} virtual machines in the {product-title} web console. diff --git a/modules/virt-about-block-pvs.adoc b/modules/virt-about-block-pvs.adoc index 9be4dafc8acc..3fc9a323c221 100644 --- a/modules/virt-about-block-pvs.adoc +++ b/modules/virt-about-block-pvs.adoc @@ -8,6 +8,7 @@ [id="virt-about-block-pvs_{context}"] = About block persistent volumes +[role="_abstract"] A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes do not have a file system and can provide performance benefits for virtual machines by reducing overhead. diff --git a/modules/virt-about-cdi-operator.adoc b/modules/virt-about-cdi-operator.adoc index 54e4a1a787c3..38be5319607c 100644 --- a/modules/virt-about-cdi-operator.adoc +++ b/modules/virt-about-cdi-operator.adoc @@ -6,6 +6,7 @@ [id="virt-about-cdi-operator_{context}"] = About the Containerized Data Importer (CDI) Operator +[role="_abstract"] The CDI Operator, `cdi-operator`, manages CDI and its related resources, which imports a virtual machine (VM) image into a persistent volume claim (PVC) by using a data volume. 
image::cnv_components_cdi-operator.png[cdi-operator components] diff --git a/modules/virt-about-changing-removing-mediated-devices.adoc b/modules/virt-about-changing-removing-mediated-devices.adoc index 3624b0342f69..c81495e5375f 100644 --- a/modules/virt-about-changing-removing-mediated-devices.adoc +++ b/modules/virt-about-changing-removing-mediated-devices.adoc @@ -6,6 +6,9 @@ [id="about-changing-removing-mediated-devices_{context}"] = About changing and removing mediated devices +[role="_abstract"] +As an administrator, you can change or remove mediated devices by editing the `HyperConverged` custom resource (CR). + You can reconfigure or remove mediated devices in several ways: * Edit the `HyperConverged` CR and change the contents of the `mediatedDeviceTypes` stanza. @@ -17,4 +20,4 @@ You can reconfigure or remove mediated devices in several ways: [NOTE] ==== If you remove the device information from the `spec.permittedHostDevices` stanza without also removing it from the `spec.mediatedDevicesConfiguration` stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas. -==== \ No newline at end of file +==== diff --git a/modules/virt-about-cloning.adoc b/modules/virt-about-cloning.adoc index d66786549aa4..264d2c10b73c 100644 --- a/modules/virt-about-cloning.adoc +++ b/modules/virt-about-cloning.adoc @@ -6,12 +6,10 @@ [id="virt-about-cloning_{context}"] = About cloning -When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the following Container Storage Interface (CSI) clone methods: +[role="_abstract"] +When cloning a data volume, the Containerized Data Importer (CDI) chooses one of two Container Storage Interface (CSI) clone methods: CSI volume cloning or smart cloning. Both methods are efficient but have certain requirements. If the requirements are not met, the CDI uses host-assisted cloning. 
-* CSI volume cloning -* Smart cloning - -Both CSI volume cloning and smart cloning methods are efficient, but they have certain requirements for use. If the requirements are not met, the CDI uses host-assisted cloning. Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods. +Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods. [id="csi-volume-cloning_{context}"] == CSI volume cloning @@ -47,7 +45,7 @@ When the requirements for neither Container Storage Interface (CSI) volume cloni Host-assisted cloning uses a source pod and a target pod to copy data from the source volume to the target volume. The target persistent volume claim (PVC) is annotated with the fallback reason that explains why host-assisted cloning has been used, and an event is created. -.Example PVC target annotation +Example PVC target annotation: [source,yaml] ---- @@ -60,7 +58,7 @@ metadata: cdi.kubevirt.io/cloneType: copy ---- -.Example event +Example event: [source,terminal] ---- diff --git a/modules/virt-about-cluster-network-addons-operator.adoc b/modules/virt-about-cluster-network-addons-operator.adoc index 2bf7fee3f77d..2749a261b6b2 100644 --- a/modules/virt-about-cluster-network-addons-operator.adoc +++ b/modules/virt-about-cluster-network-addons-operator.adoc @@ -6,6 +6,7 @@ [id="virt-about-cluster-network-addons-operator_{context}"] = About the Cluster Network Addons Operator +[role="_abstract"] The Cluster Network Addons Operator, `cluster-network-addons-operator`, deploys networking components on a cluster and manages the related resources for extended network functionality. 
image::cnv_components_cluster-network-addons-operator.png[cluster-network-addons-operator components] diff --git a/modules/virt-about-control-plane-only-updates.adoc b/modules/virt-about-control-plane-only-updates.adoc index 0c1b60dd8424..b91db062d96c 100644 --- a/modules/virt-about-control-plane-only-updates.adoc +++ b/modules/virt-about-control-plane-only-updates.adoc @@ -6,7 +6,10 @@ [id="virt-about-control-plane-only-updates_{context}"] = Control Plane Only updates -Every even-numbered minor version of {product-title} is an Extended Update Support (EUS) version. However, Kubernetes design mandates serial minor version updates, so you cannot directly update from one EUS version to the next. An EUS-to-EUS update starts with updating {VirtProductName} to the latest z-stream of the next odd-numbered minor version. Next, update {product-title} to the target EUS version. When the {product-title} update succeeds, the corresponding update for {VirtProductName} becomes available. You can now update {VirtProductName} to the target EUS version. +[role="_abstract"] +Every even-numbered minor version of {product-title} is an Extended Update Support (EUS) version. However, Kubernetes design mandates serial minor version updates, so you cannot directly update from one EUS version to the next. + +An EUS-to-EUS update starts with updating {VirtProductName} to the latest z-stream of the next odd-numbered minor version. Next, update {product-title} to the target EUS version. When the {product-title} update succeeds, the corresponding update for {VirtProductName} becomes available. You can now update {VirtProductName} to the target EUS version. [NOTE] ==== @@ -29,4 +32,4 @@ Before beginning a Control Plane Only update, you must: By default, {VirtProductName} automatically updates workloads, such as the `virt-launcher` pod, when you update the {VirtProductName} Operator. 
You can configure this behavior in the `spec.workloadUpdateStrategy` stanza of the `HyperConverged` custom resource. ==== -// link to EUS to EUS docs in assembly due to module limitations \ No newline at end of file +// link to EUS to EUS docs in assembly due to module limitations diff --git a/modules/virt-about-cpu-and-memory-quota-namespace.adoc b/modules/virt-about-cpu-and-memory-quota-namespace.adoc index 3e6b3b51ee1b..23e6703c60a3 100644 --- a/modules/virt-about-cpu-and-memory-quota-namespace.adoc +++ b/modules/virt-about-cpu-and-memory-quota-namespace.adoc @@ -6,6 +6,7 @@ [id="virt-about-cpu-and-memory-quota-namespace_{context}"] = About CPU and memory quotas in a namespace +[role="_abstract"] A _resource quota_, defined by the `ResourceQuota` object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace. -The `HyperConverged` custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of `0`. This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota. \ No newline at end of file +The `HyperConverged` custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of `0`. This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota. 
diff --git a/modules/virt-about-creating-storage-classes.adoc b/modules/virt-about-creating-storage-classes.adoc index 5f560c9ca406..977889bb2c78 100644 --- a/modules/virt-about-creating-storage-classes.adoc +++ b/modules/virt-about-creating-storage-classes.adoc @@ -6,6 +6,7 @@ [id="virt-about-creating-storage-classes_{context}"] = About creating storage classes +[role="_abstract"] When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a `StorageClass` object's parameters after you create it. In order to use the hostpath provisioner (HPP) you must create an associated storage class for the CSI driver with the `storagePools` stanza. @@ -15,4 +16,4 @@ In order to use the hostpath provisioner (HPP) you must create an associated sto Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned. To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the `StorageClass` value with `volumeBindingMode` parameter set to `WaitForFirstConsumer`, the binding and provisioning of the PV is delayed until a pod is created using the PVC. -==== \ No newline at end of file +==== diff --git a/modules/virt-about-datavolumes.adoc b/modules/virt-about-datavolumes.adoc index af116cbcc373..67754c74067e 100644 --- a/modules/virt-about-datavolumes.adoc +++ b/modules/virt-about-datavolumes.adoc @@ -8,7 +8,10 @@ [id="virt-about-datavolumes_{context}"] = About data volumes -`DataVolume` objects are custom resources that are provided by the Containerized Data Importer (CDI) project. 
Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the `dataVolumeTemplate` field in the virtual machine (VM) specification. +[role="_abstract"] +`DataVolume` objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). + +You can create a data volume as either a standalone resource or by using the `dataVolumeTemplate` field in the virtual machine (VM) specification. [NOTE] ==== diff --git a/modules/virt-about-dedicated-resources.adoc b/modules/virt-about-dedicated-resources.adoc index 1e1f5df3f040..36d800eb19c9 100644 --- a/modules/virt-about-dedicated-resources.adoc +++ b/modules/virt-about-dedicated-resources.adoc @@ -7,7 +7,10 @@ = About dedicated resources +[role="_abstract"] When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other -processes. By using dedicated resources, you can improve the performance of the +processes. + +By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions. diff --git a/modules/virt-about-dr-methods.adoc b/modules/virt-about-dr-methods.adoc index 3817b5fcedc8..25fe442fdde3 100644 --- a/modules/virt-about-dr-methods.adoc +++ b/modules/virt-about-dr-methods.adoc @@ -6,10 +6,11 @@ [id="virt-about-dr-methods_{context}"] = About disaster recovery methods -For an overview of disaster recovery (DR) concepts, architecture, and planning considerations, see the link:https://access.redhat.com/articles/7041594[Red{nbsp}Hat {VirtProductName} disaster recovery guide] in the Red{nbsp}Hat Knowledgebase. 
- +[role="_abstract"] The two primary DR methods for {VirtProductName} are Metropolitan Disaster Recovery (Metro-DR) and Regional-DR. +For an overview of disaster recovery (DR) concepts, architecture, and planning considerations, see the link:https://access.redhat.com/articles/7041594[Red{nbsp}Hat {VirtProductName} disaster recovery guide] in the Red{nbsp}Hat Knowledgebase. + [id="metro-dr_{context}"] == Metro-DR @@ -18,4 +19,4 @@ Metro-DR uses synchronous replication. It writes to storage at both the primary [id="regional-dr_{context}"] == Regional-DR -Regional-DR uses asynchronous replication. The data in the primary site is synchronized with the secondary site at regular intervals. For this type of replication, you can have a higher latency connection between the primary and secondary sites. \ No newline at end of file +Regional-DR uses asynchronous replication. The data in the primary site is synchronized with the secondary site at regular intervals. For this type of replication, you can have a higher latency connection between the primary and secondary sites. diff --git a/modules/virt-about-dv-conditions-and-events.adoc b/modules/virt-about-dv-conditions-and-events.adoc index 132d6a06cd5e..13e9c3b42ae6 100644 --- a/modules/virt-about-dv-conditions-and-events.adoc +++ b/modules/virt-about-dv-conditions-and-events.adoc @@ -6,8 +6,10 @@ [id="virt-about-dv-conditions-and-events_{context}"] = About data volume conditions and events -You can diagnose data volume issues by examining the output of the `Conditions` and `Events` sections -generated by the command: +[role="_abstract"] +You can diagnose data volume issues by examining the `Conditions` and `Events` sections of the `oc describe` command output. 
+ +Run the following command to inspect the data volume: [source,terminal] ---- diff --git a/modules/virt-about-fusion-access-san.adoc b/modules/virt-about-fusion-access-san.adoc index 95adbe5b69a6..9d003df2aad1 100644 --- a/modules/virt-about-fusion-access-san.adoc +++ b/modules/virt-about-fusion-access-san.adoc @@ -6,7 +6,8 @@ [id="about-fusion-access-san_{context}"] = About {IBMFusionFirst} -{IBMFusionFirst} is a solution that provides a scalable clustered file system for enterprise storage, primarily designed to offer access to consolidated, block-level data storage. It presents storage devices, such as disk arrays, to the operating system as if they were direct-attached storage. +[role="_abstract"] +{IBMFusionFirst} provides a scalable clustered file system for enterprise storage, primarily designed to offer access to consolidated, block-level data storage. It presents storage devices, such as disk arrays, to the operating system as if they were direct-attached storage. This solution is particularly geared towards enterprise storage for {VirtProductName} and leverages existing Storage Area Network (SAN) infrastructure. A SAN is a dedicated network of storage devices that is typically not accessible through the local area network (LAN). diff --git a/modules/virt-about-hco-operator.adoc b/modules/virt-about-hco-operator.adoc index 176a4cda28cb..a773b8801349 100644 --- a/modules/virt-about-hco-operator.adoc +++ b/modules/virt-about-hco-operator.adoc @@ -6,6 +6,7 @@ [id="virt-about-hco-operator_{context}"] = About the HyperConverged Operator (HCO) +[role="_abstract"] The HCO, `hco-operator`, provides a single entry point for deploying and managing {VirtProductName} and several helper operators with opinionated defaults. It also creates custom resources (CRs) for those operators. 
image::cnv_components_hco-operator.png[hco-operator components] diff --git a/modules/virt-about-hpp-operator.adoc b/modules/virt-about-hpp-operator.adoc index 0986a89e8f75..3de426143d25 100644 --- a/modules/virt-about-hpp-operator.adoc +++ b/modules/virt-about-hpp-operator.adoc @@ -6,6 +6,7 @@ [id="virt-about-hpp-operator_{context}"] = About the Hostpath Provisioner (HPP) Operator +[role="_abstract"] The HPP Operator, `hostpath-provisioner-operator`, deploys and manages the multi-node HPP and related resources. image::cnv_components_hpp-operator.png[hpp-operator components] diff --git a/modules/virt-about-instance-types.adoc b/modules/virt-about-instance-types.adoc index f3487ba92599..3b80adec7ab0 100644 --- a/modules/virt-about-instance-types.adoc +++ b/modules/virt-about-instance-types.adoc @@ -6,6 +6,7 @@ [id="virt-about-instance-types_{context}"] = About instance types +[role="_abstract"] An instance type is a reusable object where you can define resources and characteristics to apply to new VMs. You can define custom instance types or use the variety that are included when you install {VirtProductName}. To create a new instance type, you must first create a manifest, either manually or by using the `virtctl` CLI tool. You then create the instance type object by applying the manifest to your cluster. @@ -31,7 +32,6 @@ Because instance types require defined CPU and memory attributes, {VirtProductNa You can manually create an instance type manifest. For example: -.Example YAML file with required fields [source,yaml] ---- apiVersion: instancetype.kubevirt.io/v1beta1 @@ -49,7 +49,6 @@ spec: You can create an instance type manifest by using the `virtctl` CLI utility. 
For example: -.Example `virtctl` command with required fields [source,terminal] ---- $ virtctl create instancetype --cpu 2 --memory 256Mi diff --git a/modules/virt-about-ksm.adoc b/modules/virt-about-ksm.adoc index c89201f5e3fb..6c76f0401ebc 100644 --- a/modules/virt-about-ksm.adoc +++ b/modules/virt-about-ksm.adoc @@ -7,6 +7,7 @@ [id="virt-about-ksm_{context}"] = About using {VirtProductName} to activate KSM +[role="_abstract"] You can configure {VirtProductName} to activate kernel samepage merging (KSM) when nodes experience memory overload. [id="virt-ksm-configuration-methods"] @@ -14,18 +15,19 @@ You can configure {VirtProductName} to activate kernel samepage merging (KSM) wh You can enable or disable the KSM activation feature for all nodes by using the {product-title} web console or by editing the `HyperConverged` custom resource (CR). The `HyperConverged` CR supports more granular configuration. -[discrete] [id="virt-ksm-cr-configuration"] -=== CR configuration - +CR configuration:: ++ You can configure the KSM activation feature by editing the `spec.configuration.ksmConfiguration` stanza of the `HyperConverged` CR. - ++ +-- * You enable the feature and configure settings by editing the `ksmConfiguration` stanza. * You disable the feature by deleting the `ksmConfiguration` stanza. * You can allow {VirtProductName} to enable KSM on only a subset of nodes by adding node selection syntax to the `ksmConfiguration.nodeLabelSelector` field. - +-- ++ [NOTE] ==== Even if the KSM activation feature is disabled in {VirtProductName}, an administrator can still enable KSM on nodes that support it. 
diff --git a/modules/virt-about-libguestfs-tools-virtctl-guestfs.adoc b/modules/virt-about-libguestfs-tools-virtctl-guestfs.adoc index 2776abfaa084..98b50dae30bc 100644 --- a/modules/virt-about-libguestfs-tools-virtctl-guestfs.adoc +++ b/modules/virt-about-libguestfs-tools-virtctl-guestfs.adoc @@ -6,6 +6,7 @@ [id="virt-about-libguestfs-tools-virtctl-guestfs_{context}"] = Libguestfs and virtctl guestfs commands +[role="_abstract"] `Libguestfs` tools help you access and modify virtual machine (VM) disk images. You can use `libguestfs` tools to view and edit files in a guest, clone and build virtual machines, and format and resize disks. You can also use the `virtctl guestfs` command and its sub-commands to modify, inspect, and debug VM disks on a PVC. To see a complete list of possible sub-commands, enter `virt-` on the command line and press the Tab key. For example: diff --git a/modules/virt-about-live-migration-permissions.adoc b/modules/virt-about-live-migration-permissions.adoc index 6d759739aa19..3b98f1f63bb3 100644 --- a/modules/virt-about-live-migration-permissions.adoc +++ b/modules/virt-about-live-migration-permissions.adoc @@ -6,9 +6,10 @@ [id="virt-about-live-migration-permissions_{context}"] = About live migration permissions -In {VirtProductName} 4.19 and later, live migration operations are restricted to users who are explicitly granted the `kubevirt.io:migrate` cluster role. Users with this role can create, delete, and update virtual machine (VM) live migration requests, which are represented by `VirtualMachineInstanceMigration` (VMIM) custom resources. +[role="_abstract"] +In {VirtProductName} 4.19 and later, live migration operations are restricted to users who are explicitly granted the `kubevirt.io:migrate` cluster role. Users with this role can create, delete, and update virtual machine (VM) live migration requests. -Cluster administrators can bind the `kubevirt.io:migrate` role to trusted users or groups at either the namespace or cluster level. 
+The live migration requests are represented by `VirtualMachineInstanceMigration` (VMIM) custom resources. Cluster administrators can bind the `kubevirt.io:migrate` role to trusted users or groups at either the namespace or cluster level. Before {VirtProductName} 4.19, namespace administrators had live migration permissions by default. This behavior changed in version 4.19 to prevent unintended or malicious disruptions to infrastructure-critical migration operations. diff --git a/modules/virt-about-nmstate.adoc b/modules/virt-about-nmstate.adoc index f4f5b53ff2c1..59594fe56152 100644 --- a/modules/virt-about-nmstate.adoc +++ b/modules/virt-about-nmstate.adoc @@ -7,6 +7,7 @@ [id="virt-about-nmstate_{context}"] = About nmstate +[role="_abstract"] {VirtProductName} uses link:https://nmstate.github.io/[`nmstate`] to report on and configure the state of the node network. This makes it possible to modify network policy configuration, such as by creating a Linux bridge on all nodes, by applying a single configuration manifest to the cluster. Node networking is monitored and updated by the following objects: diff --git a/modules/virt-about-node-labeling-obsolete-cpu-models.adoc b/modules/virt-about-node-labeling-obsolete-cpu-models.adoc index 6543aaec13b2..ed114e2d1495 100644 --- a/modules/virt-about-node-labeling-obsolete-cpu-models.adoc +++ b/modules/virt-about-node-labeling-obsolete-cpu-models.adoc @@ -5,6 +5,7 @@ [id="virt-about-node-labeling-obsolete-cpu-models_{context}"] = About node labeling for obsolete CPU models +[role="_abstract"] The {VirtProductName} Operator uses a predefined list of obsolete CPU models to ensure that a node supports only valid CPU models for scheduled VMs. By default, the following CPU models are eliminated from the list of labels generated for the node: @@ -31,4 +32,4 @@ qemu64 ---- ==== -This predefined list is not visible in the `HyperConverged` CR. 
You cannot _remove_ CPU models from this list, but you can add to the list by editing the `spec.obsoleteCPUs.cpuModels` field of the `HyperConverged` CR. \ No newline at end of file +This predefined list is not visible in the `HyperConverged` CR. You cannot _remove_ CPU models from this list, but you can add to the list by editing the `spec.obsoleteCPUs.cpuModels` field of the `HyperConverged` CR. diff --git a/modules/virt-about-node-placement-virt-components.adoc b/modules/virt-about-node-placement-virt-components.adoc index 96697188092e..c6aa89bd0a1c 100644 --- a/modules/virt-about-node-placement-virt-components.adoc +++ b/modules/virt-about-node-placement-virt-components.adoc @@ -6,11 +6,8 @@ [id="virt-about-node-placement-virt-components_{context}"] = About node placement rules for {VirtProductName} components -You can use node placement rules for the following tasks: - -* Deploy virtual machines only on nodes intended for virtualization workloads. -* Deploy Operators only on infrastructure nodes. -* Maintain separation between workloads. +[role="_abstract"] +You can use node placement rules to deploy virtual machines only on nodes intended for virtualization workloads, to deploy Operators only on infrastructure nodes, or to maintain separation between workloads. Depending on the object, you can use one or more of the following rule types: diff --git a/modules/virt-about-node-placement-virtualization-components.adoc b/modules/virt-about-node-placement-virtualization-components.adoc index 1f05c8294ebb..886b142928ac 100644 --- a/modules/virt-about-node-placement-virtualization-components.adoc +++ b/modules/virt-about-node-placement-virtualization-components.adoc @@ -6,6 +6,9 @@ [id="virt-about-node-placement-virtualization-components_{context}"] = About node placement for virtualization components +[role="_abstract"] +You can customize where {VirtProductName} deploys its components by applying node placement rules. 
+ You might want to customize where {VirtProductName} deploys its components to ensure that: * Virtual machines only deploy on nodes that are intended for virtualization workloads. diff --git a/modules/virt-about-node-placement-vms.adoc b/modules/virt-about-node-placement-vms.adoc index 23f9dbf0bac8..b567f37de50f 100644 --- a/modules/virt-about-node-placement-vms.adoc +++ b/modules/virt-about-node-placement-vms.adoc @@ -6,7 +6,10 @@ [id="virt-about-node-placement-vms_{context}"] = About node placement for virtual machines -To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. You might want to do this if: +[role="_abstract"] +To ensure that virtual machines (VMs) run on appropriate nodes, you can configure node placement rules. + +You might want to do this if: * You have several VMs. To ensure fault tolerance, you want them to run on different nodes. * You have two chatty VMs. To avoid redundant inter-node routing, you want the VMs to run on the same node. diff --git a/modules/virt-about-pci-passthrough.adoc b/modules/virt-about-pci-passthrough.adoc index 2d1586d01344..ad7670b0a94d 100644 --- a/modules/virt-about-pci-passthrough.adoc +++ b/modules/virt-about-pci-passthrough.adoc @@ -6,6 +6,9 @@ [id="virt-about_pci-passthrough_{context}"] = About preparing a host device for PCI passthrough -To prepare a host device for PCI passthrough by using the CLI, create a `MachineConfig` object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the `permittedHostDevices` field of the `HyperConverged` custom resource (CR). The `permittedHostDevices` list is empty when you first install the {VirtProductName} Operator. 
+[role="_abstract"] +To prepare a host device for PCI passthrough by using the CLI, create a `MachineConfig` object and add kernel arguments to enable the Input-Output Memory Management Unit (IOMMU). + +Bind the PCI device to the Virtual Function I/O (VFIO) driver and then expose it in the cluster by editing the `permittedHostDevices` field of the `HyperConverged` custom resource (CR). The `permittedHostDevices` list is empty when you first install the {VirtProductName} Operator. To remove a PCI host device from the cluster by using the CLI, delete the PCI device information from the `HyperConverged` CR. diff --git a/modules/virt-about-preallocation.adoc b/modules/virt-about-preallocation.adoc index d3ed773b0059..b8d407a879a5 100644 --- a/modules/virt-about-preallocation.adoc +++ b/modules/virt-about-preallocation.adoc @@ -6,6 +6,7 @@ [id="virt-about-preallocation_{context}"] = About preallocation +[role="_abstract"] The Containerized Data Importer (CDI) can use the QEMU preallocate mode for data volumes to improve write performance. You can use preallocation mode for importing and uploading operations and when creating blank data volumes. If preallocation is enabled, CDI uses the best preallocation method depending on the underlying file system and device type: diff --git a/modules/virt-about-readiness-liveness-probes.adoc b/modules/virt-about-readiness-liveness-probes.adoc index f47108d9bb66..0cb50f04b180 100644 --- a/modules/virt-about-readiness-liveness-probes.adoc +++ b/modules/virt-about-readiness-liveness-probes.adoc @@ -7,6 +7,7 @@ = About readiness and liveness probes +[role="_abstract"] Use readiness and liveness probes to detect and handle unhealthy virtual machines (VMs). You can include one or more probes in the specification of the VM to ensure that traffic does not reach a VM that is not ready for it and that a new VM is created when a VM becomes unresponsive. A _readiness probe_ determines whether a VM is ready to accept service requests.
If the probe fails, the VM is removed from the list of available endpoints until the VM is ready. diff --git a/modules/virt-about-reclaiming-statically-provisioned-persistent-volumes.adoc b/modules/virt-about-reclaiming-statically-provisioned-persistent-volumes.adoc index 32d256dc9b73..d4ffc3f3c44c 100644 --- a/modules/virt-about-reclaiming-statically-provisioned-persistent-volumes.adoc +++ b/modules/virt-about-reclaiming-statically-provisioned-persistent-volumes.adoc @@ -7,6 +7,7 @@ = About reclaiming statically provisioned persistent volumes +[role="_abstract"] When you reclaim a persistent volume (PV), you unbind the PV from a persistent volume claim (PVC) and delete the PV. Depending on the underlying storage, you might need to manually delete the shared storage. You can then re-use the PV configuration to create a PV with a different name. diff --git a/modules/virt-about-scratch-space.adoc b/modules/virt-about-scratch-space.adoc index 088806c7ed30..957ea49958cb 100644 --- a/modules/virt-about-scratch-space.adoc +++ b/modules/virt-about-scratch-space.adoc @@ -6,8 +6,10 @@ [id="virt-about-scratch-space_{context}"] = About scratch space +[role="_abstract"] The Containerized Data Importer (CDI) requires scratch space (temporary storage) to complete some operations, such as importing and uploading virtual machine images. During this process, CDI provisions a scratch space PVC equal to the size of the PVC backing the destination data volume (DV). + The scratch space PVC is deleted after the operation completes or aborts. You can define the storage class that is used to bind the scratch space PVC in the `spec.scratchSpaceStorageClass` field of the `HyperConverged` custom resource. @@ -21,7 +23,6 @@ CDI requires requesting scratch space with a `file` volume mode, regardless of t If the origin PVC is backed by `block` volume mode, you must define a storage class capable of provisioning `file` volume mode PVCs. 
==== -[discrete] == Manual provisioning If there are no storage classes, CDI uses any PVCs in the project that match the size requirements for the image. diff --git a/modules/virt-about-services.adoc b/modules/virt-about-services.adoc index b38b6d09a72f..7ffa2e51e8fd 100644 --- a/modules/virt-about-services.adoc +++ b/modules/virt-about-services.adoc @@ -7,6 +7,7 @@ [id="virt-about-services_{context}"] = About services +[role="_abstract"] A Kubernetes service exposes network access for clients to an application running on a set of pods. Services offer abstraction, load balancing, and, in the case of the `NodePort` and `LoadBalancer` types, exposure to the outside world. ClusterIP:: Exposes the service on an internal IP address and as a DNS name to other applications within the cluster. A single service can map to multiple virtual machines. When a client tries to connect to the service, the client's request is load balanced among available backends. `ClusterIP` is the default service type. diff --git a/modules/virt-about-smart-cloning.adoc b/modules/virt-about-smart-cloning.adoc index e29f14f0f173..1d62f223fa7a 100644 --- a/modules/virt-about-smart-cloning.adoc +++ b/modules/virt-about-smart-cloning.adoc @@ -6,7 +6,8 @@ [id="virt-about-smart-cloning_{context}"] = About smart-cloning -When a data volume is smart-cloned, the following occurs: +[role="_abstract"] +When a data volume is smart-cloned, a set of operations is performed in a specific order. . A snapshot of the source persistent volume claim (PVC) is created. . A PVC is created from the snapshot. 
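Smart-cloning is triggered when a data volume references an existing PVC as its source. A minimal sketch of such a request, in which the names, namespace, and size are placeholders:

[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: cloned-datavolume
spec:
  source:
    pvc:
      namespace: source-namespace
      name: source-pvc
  storage:
    resources:
      requests:
        storage: 30Gi
----

If the storage class backing the source PVC supports volume snapshots, CDI can perform the snapshot-based clone described above; otherwise it falls back to a host-assisted copy.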
diff --git a/modules/virt-about-ssp-operator.adoc b/modules/virt-about-ssp-operator.adoc index 57f4757c42c1..a14b61572945 100644 --- a/modules/virt-about-ssp-operator.adoc +++ b/modules/virt-about-ssp-operator.adoc @@ -6,4 +6,5 @@ [id="virt-about-ssp-operator_{context}"] = About the Scheduling, Scale, and Performance (SSP) Operator -The SSP Operator, `ssp-operator`, deploys the common templates, the related default boot sources, the pipeline tasks, and the template validator. \ No newline at end of file +[role="_abstract"] +The SSP Operator, `ssp-operator`, deploys the common templates, the related default boot sources, the pipeline tasks, and the template validator. diff --git a/modules/virt-about-static-and-dynamic-ssh-keys.adoc b/modules/virt-about-static-and-dynamic-ssh-keys.adoc index 64ef9a81f4c6..2a8db515c279 100644 --- a/modules/virt-about-static-and-dynamic-ssh-keys.adoc +++ b/modules/virt-about-static-and-dynamic-ssh-keys.adoc @@ -6,6 +6,7 @@ [id="virt-about-static-and-dynamic-ssh-keys_{context}"] = About static and dynamic SSH key management +[role="_abstract"] You can add public SSH keys to virtual machines (VMs) statically at first boot or dynamically at runtime. [NOTE] @@ -13,7 +14,6 @@ You can add public SSH keys to virtual machines (VMs) statically at first boot o Only {op-system-base-full} 9 supports dynamic key injection. ==== -[discrete] [id="static-key-management_{context}"] == Static SSH key management @@ -24,11 +24,10 @@ You can add the key by using one of the following methods: * Add a key to a single VM when you create it by using the web console or the command line. * Add a key to a project by using the web console. Afterwards, the key is automatically added to the VMs that you create in this project. -.Use cases +Use cases: * As a VM owner, you can provision all your newly created VMs with a single key. 
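In a VM manifest, a statically managed key can be attached as an access credential. A minimal sketch, assuming a `Secret` named `authorized-keys` that contains the public key (both names are placeholders):

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      accessCredentials:
      - sshPublicKey:
          propagationMethod:
            noCloud: {}  # inject the key through the cloud-init data source
          source:
            secret:
              secretName: authorized-keys
----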
-[discrete] [id="dynamic-key-management_{context}"] == Dynamic SSH key management @@ -36,7 +35,7 @@ You can enable dynamic SSH key management for a VM with {op-system-base-full} 9 When dynamic key management is disabled, the default key management setting of a VM is determined by the image used for the VM. -.Use cases +Use cases: * Granting or revoking access to VMs: As a cluster administrator, you can grant or revoke remote VM access by adding or removing the keys of individual users from a `Secret` object that is applied to all VMs in a namespace. * User access: You can add your access credentials to all VMs that you create and manage. diff --git a/modules/virt-about-storage-pools-pvc-templates.adoc b/modules/virt-about-storage-pools-pvc-templates.adoc index 05640db9535f..ccec044a5463 100644 --- a/modules/virt-about-storage-pools-pvc-templates.adoc +++ b/modules/virt-about-storage-pools-pvc-templates.adoc @@ -6,13 +6,13 @@ [id="virt-about-storage-pools-pvc-templates_{context}"] = About storage pools created with PVC templates +[role="_abstract"] If you have a single, large persistent volume (PV), you can create a storage pool by defining a PVC template in the hostpath provisioner (HPP) custom resource (CR). A storage pool created with a PVC template can contain multiple HPP volumes. Splitting a PV into smaller volumes provides greater flexibility for data allocation. 
The PVC template is based on the `spec` stanza of the `PersistentVolumeClaim` object: -.Example `PersistentVolumeClaim` object [source,yaml] ---- apiVersion: v1 diff --git a/modules/virt-about-storage-volumes-for-vm-disks.adoc b/modules/virt-about-storage-volumes-for-vm-disks.adoc index 287691d3683c..eb23d513be59 100644 --- a/modules/virt-about-storage-volumes-for-vm-disks.adoc +++ b/modules/virt-about-storage-volumes-for-vm-disks.adoc @@ -7,6 +7,7 @@ [id="virt-about-storage-volumes-for-vm-disks_{context}"] = About volume and access modes for virtual machine disks +[role="_abstract"] If you use the storage API with known storage providers, the volume and access modes are selected automatically. However, if you use a storage class that does not have a storage profile, you must configure the volume and access mode. For a list of known storage providers for {VirtProductName}, see link:https://catalog.redhat.com/search?searchType=software&badges_and_features=OpenShift+Virtualization&subcategories=Storage[the Red Hat Ecosystem Catalog]. diff --git a/modules/virt-about-tekton-tasks-operator.adoc b/modules/virt-about-tekton-tasks-operator.adoc index 39167716c1b4..74e85de4510a 100644 --- a/modules/virt-about-tekton-tasks-operator.adoc +++ b/modules/virt-about-tekton-tasks-operator.adoc @@ -6,7 +6,8 @@ [id="virt-about-tekton-tasks-operator_{context}"] = About the Tekton Tasks Operator -The Tekton Tasks Operator, `tekton-tasks-operator`, deploys example pipelines showing the usage of OpenShift Pipelines for virtual machines (VMs). This operator also deploys additional OpenShift Pipeline tasks that allow users to create VMs from templates, copy and modify templates, and create data volumes. +[role="_abstract"] +The Tekton Tasks Operator, `tekton-tasks-operator`, deploys example pipelines showing the usage of OpenShift Pipelines for virtual machines (VMs). 
It also deploys additional OpenShift Pipeline tasks that allow users to create VMs from templates, copy and modify templates, and create data volumes. //image::cnv_components_tekton-tasks-operator.png[tekton-tasks-operator components] diff --git a/modules/virt-about-the-overview-dashboard.adoc b/modules/virt-about-the-overview-dashboard.adoc index af5fc7d51724..e2785d1e491f 100644 --- a/modules/virt-about-the-overview-dashboard.adoc +++ b/modules/virt-about-the-overview-dashboard.adoc @@ -6,6 +6,7 @@ [id="virt-about-the-overview-dashboard_{context}"] = About the {product-title} dashboards page +[role="_abstract"] Access the {product-title} dashboard, which captures high-level information about the cluster, by navigating to *Home* -> *Overview* from the {product-title} web console. diff --git a/modules/virt-about-uefi-mode-for-vms.adoc b/modules/virt-about-uefi-mode-for-vms.adoc index bfa83a464c05..2ef0fa3c2113 100644 --- a/modules/virt-about-uefi-mode-for-vms.adoc +++ b/modules/virt-about-uefi-mode-for-vms.adoc @@ -6,6 +6,7 @@ [id="virt-about-uefi-mode-for-vms_{context}"] = About UEFI mode for virtual machines +[role="_abstract"] Unified Extensible Firmware Interface (UEFI), like legacy BIOS, initializes hardware components and operating system image files when a computer starts. UEFI supports more modern features and customization options than BIOS, enabling faster boot times. It stores all the information about initialization and startup in a file with a `.efi` extension, which is stored on a special partition called EFI System Partition (ESP). The ESP also contains the boot loader programs for the operating system that is installed on the computer. 
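In the VM manifest, UEFI mode is selected through the firmware bootloader stanza. A minimal sketch with a placeholder VM name:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-uefi-vm
spec:
  template:
    spec:
      domain:
        firmware:
          bootloader:
            efi:
              secureBoot: false # enabling Secure Boot also requires SMM
----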
diff --git a/modules/virt-about-upgrading-virt.adoc b/modules/virt-about-upgrading-virt.adoc index a78f9fed6f84..a0924e5eb4ac 100644 --- a/modules/virt-about-upgrading-virt.adoc +++ b/modules/virt-about-upgrading-virt.adoc @@ -6,7 +6,13 @@ [id="virt-about-upgrading-virt_{context}"] = About updating {VirtProductName} -When you install {VirtProductName}, you select an update channel and an approval strategy. The update channel determines the versions that {VirtProductName} will be updated to. The approval strategy setting determines whether updates occur automatically or require manual approval. Both settings can impact supportability. +[role="_abstract"] +When you install {VirtProductName}, you select an update channel and an approval strategy. The update channel determines the versions that {VirtProductName} will be updated to. The approval strategy setting determines whether updates occur automatically or require manual approval. + +[NOTE] +==== +Both settings can impact supportability. +==== [id="recommended-settings_{context}"] == Recommended settings @@ -55,4 +61,4 @@ endif::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[] * Operator Lifecycle Manager (OLM) manages the lifecycle of the {VirtProductName} Operator. The Marketplace Operator, which is deployed during {product-title} installation, makes external Operators available to your cluster. -* OLM provides z-stream and minor version updates for {VirtProductName}. Minor version updates become available when you update {product-title} to the next minor version. You cannot update {VirtProductName} to the next minor version without first updating {product-title}. \ No newline at end of file +* OLM provides z-stream and minor version updates for {VirtProductName}. Minor version updates become available when you update {product-title} to the next minor version. You cannot update {VirtProductName} to the next minor version without first updating {product-title}. 
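Both settings are recorded in the `Subscription` object for the Operator. For example, a sketch in which the channel, namespace, and catalog source values are illustrative:

[source,yaml]
----
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  channel: stable # update channel
  installPlanApproval: Manual # or Automatic
  name: kubevirt-hyperconverged
  source: redhat-operators
  sourceNamespace: openshift-marketplace
----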
diff --git a/modules/virt-about-using-virtual-gpus.adoc b/modules/virt-about-using-virtual-gpus.adoc index 549de80cb639..8fa8a2ae69cb 100644 --- a/modules/virt-about-using-virtual-gpus.adoc +++ b/modules/virt-about-using-virtual-gpus.adoc @@ -6,7 +6,10 @@ [id="virt-about-using-virtual-gpus_{context}"] = About using virtual GPUs with {VirtProductName} -Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). {VirtProductName} can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the `HyperConverged` custom resource (CR). This automation is especially useful for large clusters. +[role="_abstract"] +Some graphics processing unit (GPU) cards support the creation of virtual GPUs (vGPUs). {VirtProductName} can automatically create vGPUs and other mediated devices if an administrator provides configuration details in the `HyperConverged` custom resource (CR). + +This automation is especially useful for large clusters. [NOTE] ==== diff --git a/modules/virt-about-virt-operator.adoc b/modules/virt-about-virt-operator.adoc index bc207bec7543..59a7f9c745e2 100644 --- a/modules/virt-about-virt-operator.adoc +++ b/modules/virt-about-virt-operator.adoc @@ -6,6 +6,7 @@ [id="virt-about-virt-operator_{context}"] = About the {VirtProductName} Operator +[role="_abstract"] The {VirtProductName} Operator, `virt-operator`, deploys, upgrades, and manages {VirtProductName} without disrupting current virtual machine (VM) workloads. In addition, the {VirtProductName} Operator deploys the common instance types and common preferences. 
image::cnv_components_virt-operator.png[virt-operator components] diff --git a/modules/virt-about-vm-snapshots.adoc b/modules/virt-about-vm-snapshots.adoc index 12ac47075686..85cc0b999dec 100644 --- a/modules/virt-about-vm-snapshots.adoc +++ b/modules/virt-about-vm-snapshots.adoc @@ -6,6 +6,7 @@ [id="virt-about-vm-snapshots_{context}"] = About snapshots +[role="_abstract"] A _snapshot_ represents the state and data of a virtual machine (VM) at a specific point in time. You can use a snapshot to restore an existing VM to a previous state (represented by the snapshot) for backup and disaster recovery or to rapidly roll back to a previous development version. @@ -29,7 +30,7 @@ Cloning a VM with a vTPM device attached to it or creating a new VM from its sna * Restore a VM from a snapshot * Delete an existing VM snapshot -.VM snapshot controller and custom resources +== VM snapshot controller and custom resources The VM snapshot feature introduces three new API objects defined as custom resource definitions (CRDs) for managing snapshots: diff --git a/modules/virt-about-vmis.adoc b/modules/virt-about-vmis.adoc index dc87169a6980..77941f570632 100644 --- a/modules/virt-about-vmis.adoc +++ b/modules/virt-about-vmis.adoc @@ -7,6 +7,7 @@ [id="virt-about-vmis_{context}"] = About virtual machine instances +[role="_abstract"] A virtual machine instance (VMI) is a representation of a running virtual machine (VM). When a VMI is owned by a VM or by another object, you manage it through its owner in the web console or by using the `oc` command-line interface (CLI). A standalone VMI is created and started independently with a script, through automation, or by using other methods in the CLI. In your environment, you might have standalone VMIs that were developed and started outside of the {VirtProductName} environment. You can continue to manage those standalone VMIs by using the CLI. 
You can also use the web console for specific tasks associated with standalone VMIs: @@ -24,4 +25,4 @@ When you delete a VM, the associated VMI is automatically deleted. You delete a Before you uninstall {VirtProductName}, list and view the standalone VMIs by using the CLI or the web console. Then, delete any outstanding VMIs. ==== -When you edit a VM, some settings might be applied to the VMIs dynamically and without the need for a restart. Any change made to a VM object that cannot be applied to the VMIs dynamically will trigger the `RestartRequired` VM condition. Changes are effective on the next reboot, and the condition is removed. \ No newline at end of file +When you edit a VM, some settings might be applied to the VMIs dynamically and without the need for a restart. Any change made to a VM object that cannot be applied to the VMIs dynamically will trigger the `RestartRequired` VM condition. Changes are effective on the next reboot, and the condition is removed. diff --git a/modules/virt-about-vms-and-boot-sources.adoc b/modules/virt-about-vms-and-boot-sources.adoc index bbb79b55249e..7fb1838c99f6 100644 --- a/modules/virt-about-vms-and-boot-sources.adoc +++ b/modules/virt-about-vms-and-boot-sources.adoc @@ -6,6 +6,7 @@ [id="virt-about-vms-and-boot-sources_{context}"] = About VM boot sources +[role="_abstract"] Virtual machines (VMs) consist of a VM definition and one or more disks that are backed by data volumes. VM templates enable you to create VMs using predefined specifications. Every template requires a boot source, which is a fully configured disk image including configured drivers. Each template contains a VM definition with a pointer to the boot source. Each boot source has a predefined name and namespace. For some operating systems, a boot source is automatically provided. If it is not provided, then an administrator must prepare a custom boot source. 
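An administrator can prepare a custom boot source by importing a disk image with a data volume. A minimal sketch in which the name, namespace, URL, and size are placeholders:

[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: custom-boot-source
  namespace: openshift-virtualization-os-images
spec:
  source:
    http:
      url: "https://example.com/images/custom-os.qcow2"
  storage:
    resources:
      requests:
        storage: 30Gi
----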
diff --git a/modules/virt-about-vtpm-devices.adoc b/modules/virt-about-vtpm-devices.adoc index 69dfec3ba92d..42937c356cf0 100644 --- a/modules/virt-about-vtpm-devices.adoc +++ b/modules/virt-about-vtpm-devices.adoc @@ -6,10 +6,13 @@ [id="virt-about-vtpm-devices_{context}"] = About vTPM devices +[role="_abstract"] A virtual Trusted Platform Module (vTPM) device functions like a physical Trusted Platform Module (TPM) hardware chip. You can use a vTPM device with any operating system, but Windows 11 requires -the presence of a TPM chip to install or boot. A vTPM device allows VMs created +the presence of a TPM chip to install or boot. + +A vTPM device allows VMs created from a Windows 11 image to function without a physical TPM chip. {VirtProductName} supports persisting vTPM device state by using Persistent Volume Claims (PVCs) for VMs. If you do not specify the storage class for this PVC, {VirtProductName} uses the default storage class for virtualization workloads. If the default storage class for virtualization workloads is not set, {VirtProductName} uses the default storage class for the cluster. diff --git a/modules/virt-about-workload-security.adoc b/modules/virt-about-workload-security.adoc index 689cd06f7ccf..f592756ae31b 100644 --- a/modules/virt-about-workload-security.adoc +++ b/modules/virt-about-workload-security.adoc @@ -6,6 +6,7 @@ [id="virt-about-workload-security_{context}"] = About workload security +[role="_abstract"] By default, virtual machine (VM) workloads do not run with root privileges in {VirtProductName}, and there are no supported {VirtProductName} features that require root privileges. -For each VM, a `virt-launcher` pod runs an instance of `libvirt` in _session mode_ to manage the VM process. In session mode, the `libvirt` daemon runs as a non-root user account and only permits connections from clients that are running under the same user identifier (UID). 
Therefore, VMs run as unprivileged pods, adhering to the security principle of least privilege. \ No newline at end of file +For each VM, a `virt-launcher` pod runs an instance of `libvirt` in _session mode_ to manage the VM process. In session mode, the `libvirt` daemon runs as a non-root user account and only permits connections from clients that are running under the same user identifier (UID). Therefore, VMs run as unprivileged pods, adhering to the security principle of least privilege. diff --git a/modules/virt-about-workload-updates.adoc b/modules/virt-about-workload-updates.adoc index 470b3abd8fa9..06e874e6c301 100644 --- a/modules/virt-about-workload-updates.adoc +++ b/modules/virt-about-workload-updates.adoc @@ -6,6 +6,7 @@ [id="virt-about-workload-updates_{context}"] = VM workload updates +[role="_abstract"] When you update {VirtProductName}, virtual machine workloads, including `libvirt`, `virt-launcher`, and `qemu`, update automatically if they support live migration. [NOTE] @@ -33,7 +34,6 @@ If you enable both `LiveMigrate` and `Evict`: * VMIs that do not support live migration use the `Evict` update strategy. If a VMI is controlled by a `VirtualMachine` object that has `runStrategy: Always` set, a new VMI is created in a new pod with updated components. -[discrete] [id="migration-attempts-timeouts_{context}"] == Migration attempts and timeouts diff --git a/modules/virt-access-configuration-considerations.adoc b/modules/virt-access-configuration-considerations.adoc index e0e2a66bcd46..0dea1671d083 100644 --- a/modules/virt-access-configuration-considerations.adoc +++ b/modules/virt-access-configuration-considerations.adoc @@ -6,6 +6,7 @@ [id="virt-access-configuration-considerations_{context}"] = Access configuration considerations +[role="_abstract"] Each method for configuring access to a virtual machine (VM) has advantages and limitations, depending on the traffic load and client requirements. 
[NOTE] diff --git a/modules/virt-accessing-exported-vm-manifests.adoc b/modules/virt-accessing-exported-vm-manifests.adoc index 714dc82a59b2..48eb0f427308 100644 --- a/modules/virt-accessing-exported-vm-manifests.adoc +++ b/modules/virt-accessing-exported-vm-manifests.adoc @@ -6,6 +6,7 @@ [id="virt-accessing-exported-vm-manifests_{context}"] = Accessing exported virtual machine manifests +[role="_abstract"] After you export a virtual machine (VM) or snapshot, you can get the `VirtualMachine` manifest and related information from the export server. .Prerequisites @@ -51,10 +52,8 @@ $ oc get secret export-token- -o jsonpath={.data.token} | base64 -- $ oc get vmexport -o yaml ---- -. Review the `status.links` stanza, which is divided into `external` and `internal` sections. Note the `manifests.url` fields within each section: +. Review the `status.links` stanza, which is divided into `external` and `internal` sections. Note the `manifests.url` fields within each section, for example: + -.Example output - [source,yaml] ---- apiVersion: export.kubevirt.io/v1beta1 diff --git a/modules/virt-accessing-node-exporter-outside-cluster.adoc b/modules/virt-accessing-node-exporter-outside-cluster.adoc index ff8370fd4206..c78afa906401 100644 --- a/modules/virt-accessing-node-exporter-outside-cluster.adoc +++ b/modules/virt-accessing-node-exporter-outside-cluster.adoc @@ -6,6 +6,7 @@ [id="virt-accessing-node-exporter-outside-cluster_{context}"] = Accessing the node exporter service outside the cluster +[role="_abstract"] You can access the node-exporter service outside the cluster and view the exposed metrics. 
.Prerequisites @@ -28,7 +29,8 @@ $ oc expose service -n $ oc get route -o=custom-columns=NAME:.metadata.name,DNS:.spec.host ---- + -.Example output +Example output: ++ [source,terminal] ---- NAME DNS @@ -41,7 +43,8 @@ node-exporter-service node-exporter-service-dynamation.apps.cluster.example.or $ curl -s http://node-exporter-service-dynamation.apps.cluster.example.org/metrics ---- + -.Example output +Example output: ++ [source,terminal] ---- go_gc_duration_seconds{quantile="0"} 1.5382e-05 diff --git a/modules/virt-accessing-rdp-console.adoc b/modules/virt-accessing-rdp-console.adoc index 82a67e88bb64..15d520830283 100644 --- a/modules/virt-accessing-rdp-console.adoc +++ b/modules/virt-accessing-rdp-console.adoc @@ -6,6 +6,7 @@ [id="virt-accessing-rdp-console_{context}"] = Connecting to a Windows virtual machine with an RDP console +[role="_abstract"] Create a Kubernetes `Service` object to connect to a Windows virtual machine (VM) by using your local Remote Desktop Protocol (RDP) client. .Prerequisites @@ -82,7 +83,8 @@ $ oc create -f .yaml $ oc get service -n example-namespace ---- + -.Example output for `NodePort` service +Example output for `NodePort` service: ++ [source,terminal] ---- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE @@ -96,7 +98,8 @@ rdpservice NodePort 172.30.232.73 3389:30000/TCP 5m $ oc get node -o wide ---- + -.Example output +Example output: ++ [source,terminal] ---- NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP diff --git a/modules/virt-accessing-serial-console.adoc b/modules/virt-accessing-serial-console.adoc index 020df70764f0..b6e3c2854770 100644 --- a/modules/virt-accessing-serial-console.adoc +++ b/modules/virt-accessing-serial-console.adoc @@ -6,6 +6,7 @@ [id="virt-accessing-serial-console_{context}"] = Accessing the serial console of a virtual machine instance +[role="_abstract"] The `virtctl console` command opens a serial console to the specified virtual machine instance. 
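For example, to open the serial console of a virtual machine instance named `example-vmi` (a placeholder name):

[source,terminal]
----
$ virtctl console example-vmi
----

Press `Ctrl+]` to disconnect from the console.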
diff --git a/modules/virt-accessing-vnc-console.adoc b/modules/virt-accessing-vnc-console.adoc index 24c2ecde303b..ee1b5eb14836 100644 --- a/modules/virt-accessing-vnc-console.adoc +++ b/modules/virt-accessing-vnc-console.adoc @@ -6,6 +6,7 @@ [id="virt-accessing-vnc-console_{context}"] = Accessing the graphical console of a virtual machine instance with VNC +[role="_abstract"] The `virtctl` client utility can use the `remote-viewer` function to open a graphical console to a running virtual machine instance. This capability is included in the `virt-viewer` package. diff --git a/modules/virt-add-boot-order-web.adoc b/modules/virt-add-boot-order-web.adoc index b90a4145d106..60e47efa4ab9 100644 --- a/modules/virt-add-boot-order-web.adoc +++ b/modules/virt-add-boot-order-web.adoc @@ -7,7 +7,8 @@ [id="virt-add-boot-order-web_{context}"] = Adding items to a boot order list in the web console -Add items to a boot order list by using the web console. +[role="_abstract"] +You can add items to a boot order list by using the web console. .Procedure @@ -24,7 +25,7 @@ Add items to a boot order list by using the web console. . Add any additional disks or NICs to the boot order list. . Click *Save*. - ++ [NOTE] ==== If the virtual machine is running, changes to *Boot Order* will not take effect until you restart the virtual machine.
diff --git a/modules/virt-add-custom-golden-image-heterogeneous-cluster.adoc b/modules/virt-add-custom-golden-image-heterogeneous-cluster.adoc index debfe0c2eb16..d70eeb8eb3e0 100644 --- a/modules/virt-add-custom-golden-image-heterogeneous-cluster.adoc +++ b/modules/virt-add-custom-golden-image-heterogeneous-cluster.adoc @@ -10,6 +10,7 @@ :FeatureName: Golden image support for heterogeneous clusters include::snippets/technology-preview.adoc[] +[role="_abstract"] Add a custom golden image in a heterogeneous cluster by setting the `ssp.kubevirt.io/dict.architectures` annotation in the `spec.dataImportCronTemplates.metadata.annotations` stanza of the `HyperConverged` custom resource (CR). This annotation lists the architectures supported by the image. .Prerequisites diff --git a/modules/virt-add-disk-to-vm.adoc b/modules/virt-add-disk-to-vm.adoc index a286cb762df5..42ce89296110 100644 --- a/modules/virt-add-disk-to-vm.adoc +++ b/modules/virt-add-disk-to-vm.adoc @@ -7,6 +7,7 @@ = Adding a disk to a virtual machine +[role="_abstract"] You can add a virtual disk to a virtual machine (VM) by using the {product-title} web console. .Procedure @@ -23,7 +24,7 @@ You can add a virtual disk to a virtual machine (VM) by using the {product-title .. Optional: You can clear *Apply optimized StorageProfile settings* to change the *Volume Mode* and *Access Mode* for the virtual disk. If you do not specify these parameters, the system uses the default values from the `kubevirt-storage-class-defaults` config map. . Click *Add*. - ++ [NOTE] ==== If the VM is running, you must restart the VM to apply the change. 
diff --git a/modules/virt-adding-a-boot-source-web.adoc b/modules/virt-adding-a-boot-source-web.adoc index 220f9d81fb68..dad518831a1d 100644 --- a/modules/virt-adding-a-boot-source-web.adoc +++ b/modules/virt-adding-a-boot-source-web.adoc @@ -5,6 +5,7 @@ [id="virt-adding-a-boot-source-web_{context}"] = Adding boot source to a template +[role="_abstract"] You can add a boot source or operating system image to a virtual machine (VM) template. When templates are configured with an operating system image, they are labeled *Source available* on the *Catalog* page. After you add a boot source to a template, you can create a VM from the template. There are four methods for selecting and adding a boot source in the web console: @@ -48,4 +49,6 @@ Provided boot sources are updated automatically to the latest version of the ope .. Click *Save and import* if you imported content from a URL or the registry. .. Click *Save and clone* if you cloned an existing PVC. +.Result + Your custom virtual machine template with a boot source is listed on the *Catalog* page. You can use this template to create a virtual machine. diff --git a/modules/virt-adding-container-disk-as-cd.adoc b/modules/virt-adding-container-disk-as-cd.adoc index b4c776f96b4d..773678174caf 100644 --- a/modules/virt-adding-container-disk-as-cd.adoc +++ b/modules/virt-adding-container-disk-as-cd.adoc @@ -8,6 +8,7 @@ [id="virt-adding-container-disk-as-cd_{context}"] = Installing VirtIO drivers from a container disk added as a SATA CD drive +[role="_abstract"] You can install VirtIO drivers from a container disk that you add to a Windows virtual machine (VM) as a SATA CD drive. 
[TIP] diff --git a/modules/virt-adding-kernel-arguments-enable-iommu.adoc b/modules/virt-adding-kernel-arguments-enable-iommu.adoc index 70337c36c016..be1f894f49ba 100644 --- a/modules/virt-adding-kernel-arguments-enable-iommu.adoc +++ b/modules/virt-adding-kernel-arguments-enable-iommu.adoc @@ -7,6 +7,7 @@ [id="virt-adding-kernel-arguments-enable-IOMMU_{context}"] = Adding kernel arguments to enable the IOMMU driver +[role="_abstract"] To enable the IOMMU driver in the kernel, create the `MachineConfig` object and add the kernel arguments. .Prerequisites @@ -61,7 +62,8 @@ $ oc create -f 100-worker-kernel-arg-iommu.yaml $ oc get MachineConfig ---- + -.Example output +Example output: ++ [source,terminal] ---- NAME IGNITIONVERSION AGE @@ -85,9 +87,10 @@ $ dmesg | grep -i iommu ---- * If IOMMU is enabled, output is displayed as shown in the following example: + -.Example output +Example output: ++ [source,terminal] ---- Intel: [ 0.000000] DMAR: Intel(R) IOMMU Driver AMD: [ 0.000000] AMD-Vi: IOMMU Initialized ----- \ No newline at end of file +---- diff --git a/modules/virt-adding-key-creating-vm-template.adoc b/modules/virt-adding-key-creating-vm-template.adoc index 8f556f8b939c..175c9eb95d80 100644 --- a/modules/virt-adding-key-creating-vm-template.adoc +++ b/modules/virt-adding-key-creating-vm-template.adoc @@ -16,11 +16,13 @@ endif::[] = {title} when creating a VM from a template ifdef::static-key[] +[role="_abstract"] You can add a statically managed public SSH key when you create a virtual machine (VM) by using the {product-title} web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data. Optional: You can add a key to a project. Afterwards, this key is added automatically to VMs that you create in the project. 
endif::[] ifdef::dynamic-key[] +[role="_abstract"] You can enable dynamic public SSH key injection when you create a virtual machine (VM) from a template by using the {product-title} web console. Then, you can update the key at runtime. [NOTE] diff --git a/modules/virt-adding-public-key-vm-cli.adoc b/modules/virt-adding-public-key-vm-cli.adoc index bebd28297b70..40f5361a742c 100644 --- a/modules/virt-adding-public-key-vm-cli.adoc +++ b/modules/virt-adding-public-key-vm-cli.adoc @@ -6,6 +6,7 @@ [id="virt-adding-public-key-vm-cli_{context}"] = Adding a key when creating a VM by using the CLI +[role="_abstract"] You can add a statically managed public SSH key when you create a virtual machine (VM) by using the command line. The key is added to the VM at first boot. The key is added to the VM as a cloud-init data source. This method separates the access credentials from the application data in the cloud-init user data. This method does not affect cloud-init user data. @@ -17,9 +18,10 @@ The key is added to the VM as a cloud-init data source. This method separates th .Procedure -. Create a manifest file for a `VirtualMachine` object and a `Secret` object: +. Create a manifest file for a `VirtualMachine` object and a `Secret` object. 
++ +Example manifest: + -.Example manifest [source,yaml] ---- include::snippets/virt-static-key.yaml[] @@ -50,7 +52,8 @@ $ virtctl start vm example-vm -n example-namespace $ oc describe vm example-vm -n example-namespace ---- + -.Example output +Example output: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-adding-secret-configmap-service-account-to-vm.adoc b/modules/virt-adding-secret-configmap-service-account-to-vm.adoc index b203d61ae7e5..fd675b745df5 100644 --- a/modules/virt-adding-secret-configmap-service-account-to-vm.adoc +++ b/modules/virt-adding-secret-configmap-service-account-to-vm.adoc @@ -7,7 +7,8 @@ = Adding a secret, config map, or service account to a virtual machine -You add a secret, config map, or service account to a virtual machine by using the {product-title} web console. +[role="_abstract"] +You can add a secret, config map, or service account to a virtual machine by using the {product-title} web console. These resources are added to the virtual machine as disks. You then mount the secret, config map, or service account as you would mount any other disk. diff --git a/modules/virt-adding-tls-certificates-for-authenticating-dv-imports.adoc b/modules/virt-adding-tls-certificates-for-authenticating-dv-imports.adoc index d67cefaf6a74..c84b411495ed 100644 --- a/modules/virt-adding-tls-certificates-for-authenticating-dv-imports.adoc +++ b/modules/virt-adding-tls-certificates-for-authenticating-dv-imports.adoc @@ -6,6 +6,7 @@ [id="virt-adding-tls-certificates-for-authenticating-dv-imports_{context}"] = Adding TLS certificates for authenticating data volume imports +[role="_abstract"] TLS certificates for registry or HTTPS endpoints must be added to a config map to import data from these sources. This config map must be present in the namespace of the destination data volume. 
diff --git a/modules/virt-adding-vm-to-service-mesh.adoc b/modules/virt-adding-vm-to-service-mesh.adoc index 4cfb0956f5fb..0b0998d277a6 100644 --- a/modules/virt-adding-vm-to-service-mesh.adoc +++ b/modules/virt-adding-vm-to-service-mesh.adoc @@ -6,6 +6,7 @@ [id="virt-adding-vm-to-service-mesh_{context}"] = Adding a virtual machine to a service mesh +[role="_abstract"] To add a virtual machine (VM) workload to a service mesh, enable automatic sidecar injection in the VM configuration file by setting the `sidecar.istio.io/inject` annotation to `true`. Then expose your VM as a service to view your application in the mesh. [IMPORTANT] @@ -20,9 +21,10 @@ To avoid port conflicts, do not use ports used by the Istio sidecar proxy. These .Procedure -. Edit the VM configuration file to add the `sidecar.istio.io/inject: "true"` annotation: +. Edit the VM configuration file to add the `sidecar.istio.io/inject: "true"` annotation. ++ +Example configuration file: + -.Example configuration file [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-adding-vtpm-to-vm.adoc b/modules/virt-adding-vtpm-to-vm.adoc index 5ca4c7f1aa8f..e9803c4d874a 100644 --- a/modules/virt-adding-vtpm-to-vm.adoc +++ b/modules/virt-adding-vtpm-to-vm.adoc @@ -6,6 +6,7 @@ [id="virt-adding-vtpm-to-vm_{context}"] = Adding a vTPM device to a virtual machine +[role="_abstract"] Adding a virtual Trusted Platform Module (vTPM) device to a virtual machine (VM) allows you to run a VM created from a Windows 11 image without a physical TPM device. A vTPM device also stores secrets for that VM. 
diff --git a/modules/virt-additional-scc-for-kubevirt-controller.adoc b/modules/virt-additional-scc-for-kubevirt-controller.adoc index 172022f632d2..2af509c7bc2d 100644 --- a/modules/virt-additional-scc-for-kubevirt-controller.adoc +++ b/modules/virt-additional-scc-for-kubevirt-controller.adoc @@ -6,6 +6,7 @@ [id="virt-additional-scc-for-kubevirt-controller_{context}"] = Additional SCCs and permissions for the kubevirt-controller service account +[role="_abstract"] Security context constraints (SCCs) control permissions for pods. These permissions include actions that a pod, a collection of containers, can perform and what resources it can access. You can use SCCs to define a set of conditions that a pod must run with to be accepted into the system. The `virt-controller` is a cluster controller that creates the `virt-launcher` pods for virtual machines in the cluster. @@ -19,18 +20,18 @@ The `kubevirt-controller` service account is granted additional SCCs and Linux c The `kubevirt-controller` service account is granted the following SCCs: -* `scc.AllowHostDirVolumePlugin = true` + +`scc.AllowHostDirVolumePlugin = true`:: This allows virtual machines to use the hostpath volume plugin. -* `scc.AllowPrivilegedContainer = false` + +`scc.AllowPrivilegedContainer = false`:: This ensures the `virt-launcher` pod is not run as a privileged container. -* `scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE"}` +`scc.AllowedCapabilities = []corev1.Capability{"SYS_NICE", "NET_BIND_SERVICE"}`:: -** `SYS_NICE` allows setting the CPU affinity. -** `NET_BIND_SERVICE` allows DHCP and Slirp operations. +* `SYS_NICE` allows setting the CPU affinity. +* `NET_BIND_SERVICE` allows DHCP and Slirp operations. 
-.Viewing the SCC and RBAC definitions for the kubevirt-controller +== Viewing the SCC and RBAC definitions for the kubevirt-controller You can view the `SecurityContextConstraints` definition for the `kubevirt-controller` by using the `oc` tool: diff --git a/modules/virt-analyzing-datavolume-conditions-and-events.adoc b/modules/virt-analyzing-datavolume-conditions-and-events.adoc index 00dce9aaf6f0..7e0edb0713c9 100644 --- a/modules/virt-analyzing-datavolume-conditions-and-events.adoc +++ b/modules/virt-analyzing-datavolume-conditions-and-events.adoc @@ -6,10 +6,13 @@ [id="virt-analyzing-datavolume-conditions-and-events_{context}"] = Analyzing data volume conditions and events +[role="_abstract"] By inspecting the `Conditions` and `Events` sections generated by the `describe` command, you determine the state of the data volume in relation to persistent volume claims (PVCs), and whether or -not an operation is actively running or completed. You might also receive messages +not an operation is actively running or completed. + +You might also receive messages that offer specific details about the status of the data volume, and how it came to be in its current state. @@ -28,9 +31,10 @@ The `Message` indicates which PVC owns the data volume. + `Message`, in the `Events` section, provides further details including how long the PVC has been bound (`Age`) and by what resource (`From`), -in this case `datavolume-controller`: +in this case `datavolume-controller`. ++ +Example output: + -.Example output [source,terminal] ---- Status: @@ -62,9 +66,10 @@ the `Message` displays an inability to connect due to a `404`, listed in the + From this information, you conclude that an import operation was running, creating contention for other operations that are -attempting to access the data volume: +attempting to access the data volume. 
++ +Example output: + -.Example output [source,terminal] ---- Status: @@ -85,9 +90,10 @@ Status: * `Ready` – If `Type` is `Ready` and `Status` is `True`, then the data volume is ready to be used, as in the following example. If the data volume is not ready to be -used, the `Status` is `False`: +used, the `Status` is `False`. ++ +Example output: + -.Example output [source,terminal] ---- Status: diff --git a/modules/virt-applying-node-placement-rules.adoc b/modules/virt-applying-node-placement-rules.adoc index 68affff54142..41ad63410964 100644 --- a/modules/virt-applying-node-placement-rules.adoc +++ b/modules/virt-applying-node-placement-rules.adoc @@ -7,9 +7,11 @@ = Applying node placement rules ifndef::openshift-rosa,openshift-dedicated[] +[role="_abstract"] You can apply node placement rules by editing a `Subscription`, `HyperConverged`, or `HostPathProvisioner` object using the command line. endif::openshift-rosa,openshift-dedicated[] ifdef::openshift-rosa,openshift-dedicated[] +[role="_abstract"] You can apply node placement rules by editing a `HyperConverged` or `HostPathProvisioner` object using the command line. endif::openshift-rosa,openshift-dedicated[] diff --git a/modules/virt-assigning-pci-device-virtual-machine.adoc b/modules/virt-assigning-pci-device-virtual-machine.adoc index 3815f8e410a7..9cf244fc689d 100644 --- a/modules/virt-assigning-pci-device-virtual-machine.adoc +++ b/modules/virt-assigning-pci-device-virtual-machine.adoc @@ -6,12 +6,14 @@ [id="virt-assigning-pci-device-virtual-machine_{context}"] = Assigning a PCI device to a virtual machine +[role="_abstract"] When a PCI device is available in a cluster, you can assign it to a virtual machine and enable PCI passthrough. .Procedure * Assign the PCI device to a virtual machine as a host device. 
+ -.Example +Example: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 @@ -31,7 +33,8 @@ spec: [source,terminal] $ lspci -nnk | grep NVIDIA + -.Example output +Example output: ++ [source,terminal] ---- $ 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) diff --git a/modules/virt-assigning-vgpu-vm-cli.adoc b/modules/virt-assigning-vgpu-vm-cli.adoc index a0a30dd4a123..38bea28ad112 100644 --- a/modules/virt-assigning-vgpu-vm-cli.adoc +++ b/modules/virt-assigning-vgpu-vm-cli.adoc @@ -6,6 +6,7 @@ [id="virt-assigning-mdev-vm-cli_{context}"] = Assigning a vGPU to a VM by using the CLI +[role="_abstract"] Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs). .Prerequisites @@ -15,9 +16,10 @@ Assign mediated devices such as virtual GPUs (vGPUs) to virtual machines (VMs). .Procedure -* Assign the mediated device to a virtual machine (VM) by editing the `spec.domain.devices.gpus` stanza of the `VirtualMachine` manifest: +* Assign the mediated device to a virtual machine (VM) by editing the `spec.domain.devices.gpus` stanza of the `VirtualMachine` manifest. ++ +Example virtual machine manifest: + -.Example virtual machine manifest [source,yaml] ---- apiVersion: kubevirt.io/v1 @@ -41,4 +43,4 @@ spec: [source,terminal] ---- $ lspci -nnk | grep ----- \ No newline at end of file +---- diff --git a/modules/virt-assigning-vgpu-vm-web.adoc b/modules/virt-assigning-vgpu-vm-web.adoc index 109fb6ab3c29..925d48bc744f 100644 --- a/modules/virt-assigning-vgpu-vm-web.adoc +++ b/modules/virt-assigning-vgpu-vm-web.adoc @@ -6,7 +6,9 @@ [id="virt-assigning-vgpu-vm-web_{context}"] = Assigning a vGPU to a VM by using the web console +[role="_abstract"] You can assign virtual GPUs to virtual machines by using the {product-title} web console. + [NOTE] ==== You can add hardware devices to virtual machines created from customized templates or a YAML file. 
You cannot add devices to pre-supplied boot source templates for specific operating systems. @@ -29,4 +31,4 @@ You can add hardware devices to virtual machines created from customized templat . Click *Save*. .Verification -* To confirm that the devices were added to the VM, click the *YAML* tab and review the `VirtualMachine` configuration. Mediated devices are added to the `spec.domain.devices` stanza. \ No newline at end of file +* To confirm that the devices were added to the VM, click the *YAML* tab and review the `VirtualMachine` configuration. Mediated devices are added to the `spec.domain.devices` stanza. diff --git a/modules/virt-attaching-virtio-disk-to-windows-existing.adoc b/modules/virt-attaching-virtio-disk-to-windows-existing.adoc index 785ece68f5da..73dba43fc9ea 100644 --- a/modules/virt-attaching-virtio-disk-to-windows-existing.adoc +++ b/modules/virt-attaching-virtio-disk-to-windows-existing.adoc @@ -6,6 +6,7 @@ [id="virt-attaching-virtio-disk-to-windows-existing_{context}"] = Attaching VirtIO container disk to an existing Windows VM +[role="_abstract"] You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done to an existing VM. .Procedure @@ -14,4 +15,4 @@ You must attach the VirtIO container disk to the Windows VM to install the neces . Go to *VM Details* -> *Configuration* -> *Storage*. . Select the *Mount Windows drivers disk* checkbox. . Click *Save*. -. Start the VM, and connect to a graphical console. \ No newline at end of file +. Start the VM, and connect to a graphical console. 
diff --git a/modules/virt-attaching-virtio-disk-to-windows.adoc b/modules/virt-attaching-virtio-disk-to-windows.adoc index acb9e300f93b..e1afbc8eb67c 100644 --- a/modules/virt-attaching-virtio-disk-to-windows.adoc +++ b/modules/virt-attaching-virtio-disk-to-windows.adoc @@ -6,6 +6,7 @@ [id="virt-attaching-virtio-disk-to-windows_{context}"] = Attaching VirtIO container disk to Windows VMs during installation +[role="_abstract"] You must attach the VirtIO container disk to the Windows VM to install the necessary Windows drivers. This can be done during creation of the VM. .Procedure @@ -15,4 +16,5 @@ You must attach the VirtIO container disk to the Windows VM to install the neces . Click the *Customize VirtualMachine parameters*. . Click *Create VirtualMachine*. +.Result After the VM is created, the `virtio-win` SATA CD disk will be attached to the VM. diff --git a/modules/virt-attaching-vm-secondary-network-cli.adoc b/modules/virt-attaching-vm-secondary-network-cli.adoc index 3ce4fcf31a4c..23bb51677820 100644 --- a/modules/virt-attaching-vm-secondary-network-cli.adoc +++ b/modules/virt-attaching-vm-secondary-network-cli.adoc @@ -6,6 +6,7 @@ [id="virt-attaching-vm-secondary-network-cli_{context}"] = Configuring a VM network interface by using the CLI +[role="_abstract"] You can configure a virtual machine (VM) network interface for a bridge network by using the command line. .Prerequisites @@ -49,7 +50,7 @@ $ oc apply -f example-vm.yaml ---- . Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. - ++ [NOTE] ==== When running {VirtProductName} on {ibm-z-name} using an OSA card, you must register the MAC address of the device. For more information, see link:https://www.ibm.com/docs/en/linux-on-systems?topic=choices-osa-interface-traffic-forwarding[OSA interface traffic forwarding] (IBM documentation). 
diff --git a/modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc b/modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc index 10e5b2180330..e93d7f2582b4 100644 --- a/modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc +++ b/modules/virt-attaching-vm-to-ovn-secondary-nw-cli.adoc @@ -6,6 +6,7 @@ [id="virt-attaching-vm-to-ovn-secondary-nw-cli_{context}"] = Attaching a virtual machine to an OVN-Kubernetes secondary network using the CLI +[role="_abstract"] You can connect a virtual machine (VM) to the OVN-Kubernetes secondary network by including the network details in the VM configuration. .Prerequisites @@ -53,4 +54,4 @@ spec: $ oc apply -f .yaml ---- -. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. \ No newline at end of file +. Optional: If you edited a running virtual machine, you must restart it for the changes to take effect. diff --git a/modules/virt-attaching-vm-to-primary-udn-web.adoc b/modules/virt-attaching-vm-to-primary-udn-web.adoc index bad8f14964ac..64fd4f8dee56 100644 --- a/modules/virt-attaching-vm-to-primary-udn-web.adoc +++ b/modules/virt-attaching-vm-to-primary-udn-web.adoc @@ -3,9 +3,10 @@ // * virt/vm_networking/virt-connecting-vm-to-primary-udn.adoc :_mod-docs-content-type: PROCEDURE -[id="virt-attaching-vm-to-primary-udn-web_{context}"] +[id="virt-attaching-vm-to-primary-udn-web_{context}"] = Attaching a virtual machine to the primary user-defined network by using the web console +[role="_abstract"] You can connect a virtual machine (VM) to the primary user-defined network (UDN) by using the {product-title} web console. VMs that are created in a namespace where the primary UDN is configured are automatically attached to the UDN with the Layer 2 bridge network binding plugin. To attach a VM to the primary UDN by using the Plug a Simple Socket Transport (passt) binding, enable the plugin and configure the VM network interface in the web console. 
@@ -41,4 +42,4 @@ include::snippets/technology-preview.adoc[] . Click *Save*. -. If your VM is running, restart it for the changes to take effect. \ No newline at end of file +. If your VM is running, restart it for the changes to take effect. diff --git a/modules/virt-attaching-vm-to-primary-udn.adoc b/modules/virt-attaching-vm-to-primary-udn.adoc index 8b1fa2736dab..b9e951d41105 100644 --- a/modules/virt-attaching-vm-to-primary-udn.adoc +++ b/modules/virt-attaching-vm-to-primary-udn.adoc @@ -6,6 +6,7 @@ [id="virt-attaching-vm-to-primary-udn_{context}"] = Attaching a virtual machine to the primary user-defined network by using the CLI +[role="_abstract"] You can connect a virtual machine (VM) to the primary user-defined network (UDN) by using the CLI. .Prerequisites @@ -60,4 +61,4 @@ include::snippets/technology-preview.adoc[] [source,terminal] ---- $ oc apply -f .yaml ----- \ No newline at end of file +---- diff --git a/modules/virt-attaching-vm-to-secondary-udn.adoc b/modules/virt-attaching-vm-to-secondary-udn.adoc index a204cd2b529c..de6722972235 100644 --- a/modules/virt-attaching-vm-to-secondary-udn.adoc +++ b/modules/virt-attaching-vm-to-secondary-udn.adoc @@ -6,6 +6,7 @@ [id="virt-attaching-vm-to-secondary-udn_{context}"] = Attaching a virtual machine to secondary user-defined networks by using the CLI +[role="_abstract"] You can connect a virtual machine (VM) to multiple secondary cluster-scoped user-defined networks (CUDNs) by configuring the interface binding. .Prerequisites @@ -60,4 +61,4 @@ where: [NOTE] ==== When running {VirtProductName} on {ibm-z-name} using an OSA card, be aware that the OSA card only forwards network traffic to devices that are registered with the OSA device. As a result, any traffic destined for unregistered devices is not forwarded. 
-==== \ No newline at end of file +==== diff --git a/modules/virt-attaching-vm-to-sriov-network-web-console.adoc b/modules/virt-attaching-vm-to-sriov-network-web-console.adoc index 55f22670f89d..113f5f3b8c04 100644 --- a/modules/virt-attaching-vm-to-sriov-network-web-console.adoc +++ b/modules/virt-attaching-vm-to-sriov-network-web-console.adoc @@ -6,6 +6,7 @@ [id="virt-attaching-vm-to-sriov-network-web-console_{context}"] = Connecting a VM to an SR-IOV network by using the web console +[role="_abstract"] You can connect a VM to the SR-IOV network by including the network details in the VM configuration. .Prerequisites diff --git a/modules/virt-attaching-vm-to-sriov-network.adoc b/modules/virt-attaching-vm-to-sriov-network.adoc index 8f566dded3ef..870435f99efe 100644 --- a/modules/virt-attaching-vm-to-sriov-network.adoc +++ b/modules/virt-attaching-vm-to-sriov-network.adoc @@ -6,6 +6,7 @@ [id="virt-attaching-vm-to-sriov-network_{context}"] = Connecting a virtual machine to an SR-IOV network by using the CLI +[role="_abstract"] You can connect the virtual machine (VM) to the SR-IOV network by including the network details in the VM configuration. .Prerequisites diff --git a/modules/virt-automatic-certificates-renewal.adoc b/modules/virt-automatic-certificates-renewal.adoc index ad70465b00a3..124e4fad3253 100644 --- a/modules/virt-automatic-certificates-renewal.adoc +++ b/modules/virt-automatic-certificates-renewal.adoc @@ -6,9 +6,10 @@ [id="virt-automatic-certificates-renewal_{context}"] = TLS certificates +[role="_abstract"] TLS certificates for {VirtProductName} components are renewed and rotated automatically. You are not required to refresh them manually. 
-.Automatic renewal schedules +== Automatic renewal schedules TLS certificates are automatically deleted and replaced according to the following schedule: diff --git a/modules/virt-autoupdate-custom-bootsource.adoc b/modules/virt-autoupdate-custom-bootsource.adoc index 7ed99472659b..abb5f1eccb96 100644 --- a/modules/virt-autoupdate-custom-bootsource.adoc +++ b/modules/virt-autoupdate-custom-bootsource.adoc @@ -7,6 +7,7 @@ [id="virt-autoupdate-custom-bootsource_{context}"] = Enabling automatic updates for custom boot sources +[role="_abstract"] {VirtProductName} automatically updates system-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic updates by editing the `HyperConverged` custom resource (CR). .Prerequisites @@ -25,7 +26,6 @@ $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} . Edit the `HyperConverged` CR, adding the appropriate template and boot source in the `dataImportCronTemplates` section. For example: + -.Example custom resource [source,yaml] ---- apiVersion: hco.kubevirt.io/v1beta1 diff --git a/modules/virt-aws-bm.adoc b/modules/virt-aws-bm.adoc index edafaa8b7955..7fcc9f90e14a 100644 --- a/modules/virt-aws-bm.adoc +++ b/modules/virt-aws-bm.adoc @@ -6,6 +6,7 @@ ifndef::openshift-rosa,openshift-dedicated,openshift-rosa-hcp[] [id="virt-aws-bm_{context}"] = {VirtProductName} on AWS bare metal +[role="_abstract"] You can run {VirtProductName} on an Amazon Web Services (AWS) bare metal {product-title} cluster. [NOTE] @@ -115,4 +116,4 @@ Hosted control planes (HCPs):: -- * HCPs for {VirtProductName} are not currently supported on AWS infrastructure. 
-- -endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] \ No newline at end of file +endif::openshift-dedicated,openshift-rosa,openshift-rosa-hcp[] diff --git a/modules/virt-binding-devices-vfio-driver.adoc b/modules/virt-binding-devices-vfio-driver.adoc index 02504353e16a..27478a64321c 100644 --- a/modules/virt-binding-devices-vfio-driver.adoc +++ b/modules/virt-binding-devices-vfio-driver.adoc @@ -5,7 +5,11 @@ :_mod-docs-content-type: PROCEDURE [id="virt-binding-devices-vfio-driver_{context}"] = Binding PCI devices to the VFIO driver -To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for `vendor-ID` and `device-ID` from each device and create a list with the values. Add this list to the `MachineConfig` object. The `MachineConfig` Operator generates the `/etc/modprobe.d/vfio.conf` on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver. + +[role="_abstract"] +To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values for `vendor-ID` and `device-ID` from each device and create a list with the values. Add this list to the `MachineConfig` object. + +The `MachineConfig` Operator generates the `/etc/modprobe.d/vfio.conf` on the nodes with the PCI devices, and binds the PCI devices to the VFIO driver. .Prerequisites * You added kernel arguments to enable IOMMU for the CPU. 
@@ -19,7 +23,8 @@ To bind PCI devices to the VFIO (Virtual Function I/O) driver, obtain the values $ lspci -nnv | grep -i nvidia ---- + -.Example output +Example output: ++ [source,terminal] ---- 02:01.0 3D controller [0302]: NVIDIA Corporation GV100GL [Tesla V100 PCIe 32GB] [10de:1eb8] (rev a1) @@ -32,7 +37,8 @@ $ lspci -nnv | grep -i nvidia include::snippets/butane-version.adoc[] ==== + -.Example +Example: ++ [source,yaml,subs="attributes+"] ---- variant: openshift @@ -80,7 +86,8 @@ $ oc apply -f 100-worker-vfiopci.yaml $ oc get MachineConfig ---- + -.Example output +Example output: ++ [source,terminal] ---- NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE @@ -103,7 +110,8 @@ $ lspci -nnk -d 10de: ---- The output confirms that the VFIO driver is being used. + -.Example output +Example output: ++ ---- 04:00.0 3D controller [0302]: NVIDIA Corporation GP102GL [Tesla P40] [10de:1eb8] (rev a1) Subsystem: NVIDIA Corporation Device [10de:1eb8] diff --git a/modules/virt-booting-vms-uefi-mode.adoc b/modules/virt-booting-vms-uefi-mode.adoc index 0550cb9f7f56..b61a69e8281e 100644 --- a/modules/virt-booting-vms-uefi-mode.adoc +++ b/modules/virt-booting-vms-uefi-mode.adoc @@ -6,6 +6,7 @@ [id="virt-booting-vms-uefi-mode_{context}"] = Booting virtual machines in UEFI mode +[role="_abstract"] You can configure a virtual machine to boot in UEFI mode by editing the `VirtualMachine` manifest. .Prerequisites @@ -14,9 +15,9 @@ You can configure a virtual machine to boot in UEFI mode by editing the `Virtual .Procedure -. Edit or create a `VirtualMachine` manifest file. Use the `spec.firmware.bootloader` stanza to configure UEFI mode: +. Edit or create a `VirtualMachine` manifest file. Use the `spec.firmware.bootloader` stanza to configure UEFI mode. 
+ -.Booting in UEFI mode with secure boot active +Booting in UEFI mode with secure boot active: ++ [source,yaml] ---- apiversion: kubevirt.io/v1 diff --git a/modules/virt-building-real-time-container-disk-image.adoc b/modules/virt-building-real-time-container-disk-image.adoc index 651c13e9570b..60eefc11b074 100644 --- a/modules/virt-building-real-time-container-disk-image.adoc +++ b/modules/virt-building-real-time-container-disk-image.adoc @@ -6,7 +6,10 @@ [id="virt-building-real-time-container-disk-image_{context}"] = Building a container disk image for {op-system-base} virtual machines -You can build a custom {op-system-base-full} 8 OS image in `qcow2` format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the `spec.param.vmUnderTestContainerDiskImage` attribute of the real-time checkup config map. +[role="_abstract"] +You can build a custom {op-system-base-full} 8 OS image in `qcow2` format and use it to create a container disk image. + +You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the `spec.param.vmUnderTestContainerDiskImage` attribute of the real-time checkup config map. To build a container disk image, you must create an image builder virtual machine (VM). The _image builder VM_ is a {op-system-base} 8 VM that can be used to build custom {op-system-base} images. @@ -162,4 +165,4 @@ $ podman build . -t real-time-rhel:latest $ podman push real-time-rhel:latest ---- -. Provide a link to the container disk image in the `spec.param.vmUnderTestContainerDiskImage` attribute in the real-time checkup config map. \ No newline at end of file +. Provide a link to the container disk image in the `spec.param.vmUnderTestContainerDiskImage` attribute in the real-time checkup config map. 
diff --git a/modules/virt-building-vm-containerdisk-image.adoc b/modules/virt-building-vm-containerdisk-image.adoc index 5c87ba56c0ae..1cb405397316 100644 --- a/modules/virt-building-vm-containerdisk-image.adoc +++ b/modules/virt-building-vm-containerdisk-image.adoc @@ -6,7 +6,10 @@ [id="virt-building-vm-containerdisk-image_{context}"] = Building a container disk image for {op-system-base} virtual machines -You can build a custom {op-system-base-full} 9 OS image in `qcow2` format and use it to create a container disk image. You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the `spec.param.vmContainerDiskImage` attribute of the DPDK checkup config map. +[role="_abstract"] +You can build a custom {op-system-base-full} 9 OS image in `qcow2` format and use it to create a container disk image. + +You can store the container disk image in a registry that is accessible from your cluster and specify the image location in the `spec.param.vmContainerDiskImage` attribute of the DPDK checkup config map. To build a container disk image, you must create an image builder virtual machine (VM). The _image builder VM_ is a {op-system-base} 9 VM that can be used to build custom {op-system-base} images. @@ -163,4 +166,4 @@ $ podman build . -t dpdk-rhel:latest $ podman push dpdk-rhel:latest ---- -. Provide a link to the container disk image in the `spec.param.vmUnderTestContainerDiskImage` attribute in the DPDK checkup config map. \ No newline at end of file +. Provide a link to the container disk image in the `spec.param.vmUnderTestContainerDiskImage` attribute in the DPDK checkup config map. 
diff --git a/modules/virt-canceling-vm-migration-cli.adoc b/modules/virt-canceling-vm-migration-cli.adoc index 2af604b0de16..8c1fc9f094c7 100644 --- a/modules/virt-canceling-vm-migration-cli.adoc +++ b/modules/virt-canceling-vm-migration-cli.adoc @@ -6,6 +6,7 @@ [id="virt-canceling-vm-migration-cli_{context}"] = Canceling live migration by using the CLI +[role="_abstract"] -Cancel the live migration of a virtual machine by deleting the +You can cancel the live migration of a virtual machine by deleting the `VirtualMachineInstanceMigration` object associated with the migration. @@ -19,7 +20,6 @@ Cancel the live migration of a virtual machine by deleting the * Delete the `VirtualMachineInstanceMigration` object that triggered the live migration, `migration-job` in this example: + - [source,terminal] ---- $ oc delete vmim migration-job diff --git a/modules/virt-canceling-vm-migration-web.adoc b/modules/virt-canceling-vm-migration-web.adoc index da819b6043f5..b4d12d23ef17 100644 --- a/modules/virt-canceling-vm-migration-web.adoc +++ b/modules/virt-canceling-vm-migration-web.adoc @@ -6,6 +6,7 @@ [id="virt-canceling-vm-migration-web_{context}"] = Canceling live migration by using the web console +[role="_abstract"] You can cancel the live migration of a virtual machine (VM) by using the {product-title} web console. .Prerequisites diff --git a/modules/virt-cdi-supported-operations-matrix.adoc b/modules/virt-cdi-supported-operations-matrix.adoc index 9db52fd486d3..f2fdfde74beb 100644 --- a/modules/virt-cdi-supported-operations-matrix.adoc +++ b/modules/virt-cdi-supported-operations-matrix.adoc @@ -14,58 +14,77 @@ [id="virt-cdi-supported-operations-matrix_{context}"] = CDI supported operations matrix +[role="_abstract"] This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space. 
|=== |Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload | KubeVirt (QCOW2) -|✓ QCOW2 + -✓ GZ* + +a|✓ QCOW2 + +✓ GZ* + ✓ XZ* -|✓ QCOW2** + -✓ GZ* + +a|✓ QCOW2** + +✓ GZ* + ✓ XZ* -|✓ QCOW2 + -✓ GZ* + +a|✓ QCOW2 + +✓ GZ* + ✓ XZ* -| ✓ QCOW2* + -□ GZ + +a| ✓ QCOW2* + +□ GZ + □ XZ -| ✓ QCOW2* + -✓ GZ* + +a| ✓ QCOW2* + +✓ GZ* + ✓ XZ* | KubeVirt (RAW) -|✓ RAW + -✓ GZ + +a|✓ RAW + +✓ GZ + ✓ XZ -|✓ RAW + -✓ GZ + +a|✓ RAW + +✓ GZ + ✓ XZ -| ✓ RAW + -✓ GZ + +a| ✓ RAW + +✓ GZ + ✓ XZ -| ✓ RAW* + -□ GZ + -□ XZ +a| ✓ RAW* -| ✓ RAW* + -✓ GZ* + -✓ XZ* -|=== +□ GZ + +□ XZ -✓ Supported operation +a| ✓ RAW* -□ Unsupported operation +✓ GZ* -$$*$$ Requires scratch space +✓ XZ* +|=== -$$**$$ Requires scratch space if a custom certificate authority is required +[horizontal] +✓:: Supported operation +□:: Unsupported operation +$$*$$:: Requires scratch space +$$**$$:: Requires scratch space if a custom certificate authority is required diff --git a/modules/virt-change-vm-instance-type-cli.adoc b/modules/virt-change-vm-instance-type-cli.adoc index c654aff9f237..e8fb81c344c7 100644 --- a/modules/virt-change-vm-instance-type-cli.adoc +++ b/modules/virt-change-vm-instance-type-cli.adoc @@ -34,7 +34,7 @@ $ oc patch vm/ --type merge -p '{"spec":{"instancetype":{"name": " -o json | jq .status.instancetypeRef ---- + -*Example output* +Example output: + [source,terminal] ---- @@ -54,7 +54,7 @@ $ oc get vms/ -o json | jq .status.instancetypeRef $ oc get vmi/ -o json | jq .spec.domain.cpu ---- + -*Example output that verifies that the revision uses 2 vCPUs* +Example output that verifies that the revision uses 2 vCPUs: + [source,terminal] ---- diff --git a/modules/virt-changing-update-settings.adoc b/modules/virt-changing-update-settings.adoc index a4eeef88c704..fdf443bfeca9 100644 --- a/modules/virt-changing-update-settings.adoc +++ b/modules/virt-changing-update-settings.adoc @@ -6,6 +6,7 @@ [id="virt-changing-update-settings_{context}"] = Changing update settings +[role="_abstract"] 
You can change the update channel and approval strategy for your {VirtProductName} Operator subscription by using the web console. .Prerequisites diff --git a/modules/virt-checking-cluster-dpdk-readiness.adoc b/modules/virt-checking-cluster-dpdk-readiness.adoc index a9bac8133cf0..8b92ee16e17e 100644 --- a/modules/virt-checking-cluster-dpdk-readiness.adoc +++ b/modules/virt-checking-cluster-dpdk-readiness.adoc @@ -6,6 +6,7 @@ [id="virt-checking-cluster-dpdk-readiness_{context}"] = Running a DPDK checkup by using the CLI +[role="_abstract"] Use a predefined checkup to verify that your {product-title} cluster node can run a virtual machine (VM) with a Data Plane Development Kit (DPDK) workload with zero packet loss. The DPDK checkup runs traffic between a traffic generator and a VM running a test DPDK application. You run a DPDK checkup by performing the following steps: @@ -24,11 +25,11 @@ You run a DPDK checkup by performing the following steps: .Procedure -. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest for the DPDK checkup: +. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest for the DPDK checkup. ++ +Example service account, role, and rolebinding manifest file: + -.Example service account, role, and rolebinding manifest file [%collapsible] -==== [source,yaml] ---- --- @@ -85,7 +86,6 @@ roleRef: kind: Role name: kubevirt-dpdk-checker ---- -==== . Apply the `ServiceAccount`, `Role`, and `RoleBinding` manifest: + @@ -94,9 +94,10 @@ roleRef: $ oc apply -n -f .yaml ---- -. Create a `ConfigMap` manifest that contains the input parameters for the checkup: +. Create a `ConfigMap` manifest that contains the input parameters for the checkup. ++ +Example input config map: + -.Example input config map [source,yaml] ---- apiVersion: v1 @@ -122,9 +123,10 @@ data: $ oc apply -n -f .yaml ---- -. Create a `Job` manifest to run the checkup: +. Create a `Job` manifest to run the checkup. 
++ +Example job manifest: + -.Example job manifest [source,yaml,subs="attributes+"] ---- apiVersion: batch/v1 @@ -182,7 +184,8 @@ $ oc wait job dpdk-checkup -n --for condition=complete --time $ oc get configmap dpdk-checkup-config -n -o yaml ---- + -.Example output config map (success) +Example output config map (success): ++ [source,yaml] ---- apiVersion: v1 diff --git a/modules/virt-checking-storage-configuration.adoc b/modules/virt-checking-storage-configuration.adoc index 1ccfe90d705b..aaa0702f04fb 100644 --- a/modules/virt-checking-storage-configuration.adoc +++ b/modules/virt-checking-storage-configuration.adoc @@ -6,6 +6,7 @@ [id="virt-checking-storage-configuration_{context}"] = Running a storage checkup by using the CLI +[role="_abstract"] Use a predefined checkup to verify that the {product-title} cluster storage is configured optimally to run {VirtProductName} workloads. .Prerequisites @@ -32,11 +33,11 @@ subjects: .Procedure -. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest file for the storage checkup: +. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest file for the storage checkup. ++ +Example service account, role, and rolebinding manifest: + -.Example service account, role, and rolebinding manifest [%collapsible] -==== [source,yaml] ---- --- @@ -84,7 +85,6 @@ roleRef: kind: Role name: storage-checkup-role ---- -==== . Apply the `ServiceAccount`, `Role`, and `RoleBinding` manifest in the target namespace: + @@ -95,7 +95,8 @@ $ oc apply -n -f .yaml . Create a `ConfigMap` and `Job` manifest file. The config map contains the input parameters for the checkup job. 
+ -.Example input config map and job manifest +Example input config map and job manifest: ++ [source,yaml,subs="attributes+"] ---- --- @@ -152,7 +153,8 @@ $ oc wait job storage-checkup -n --for condition=complete --t $ oc get configmap storage-checkup-config -n -o yaml ---- + -.Example output config map (success) +Example output config map (success): ++ [source,yaml,subs="attributes+"] ---- apiVersion: v1 diff --git a/modules/virt-cloning-a-datavolume.adoc b/modules/virt-cloning-a-datavolume.adoc index c776c4f1ffeb..4062f12c7daa 100644 --- a/modules/virt-cloning-a-datavolume.adoc +++ b/modules/virt-cloning-a-datavolume.adoc @@ -6,6 +6,7 @@ [id="virt-cloning-a-datavolume_{context}"] = Smart-cloning a PVC by using the CLI +[role="_abstract"] You can smart-clone a persistent volume claim (PVC) by using the command line to create a `DataVolume` object. .Prerequisites @@ -15,7 +16,8 @@ You can smart-clone a persistent volume claim (PVC) by using the command line to * The source and target PVCs must have the same storage provider and volume mode. * The value of the `driver` key of the `VolumeSnapshotClass` object must match the value of the `provisioner` key of the `StorageClass` object as shown in the following example: + -.Example `VolumeSnapshotClass` object +Example `VolumeSnapshotClass` object: ++ [source,yaml] ---- kind: VolumeSnapshotClass @@ -24,7 +26,8 @@ driver: openshift-storage.rbd.csi.ceph.com # ... 
---- + -.Example `StorageClass` object +Example `StorageClass` object: ++ [source,yaml] ---- kind: StorageClass diff --git a/modules/virt-cloning-pvc-of-vm-disk-into-new-datavolume.adoc b/modules/virt-cloning-pvc-of-vm-disk-into-new-datavolume.adoc index 5935c4b3f350..9a96b340a899 100644 --- a/modules/virt-cloning-pvc-of-vm-disk-into-new-datavolume.adoc +++ b/modules/virt-cloning-pvc-of-vm-disk-into-new-datavolume.adoc @@ -7,6 +7,7 @@ [id="virt-cloning-pvc-of-vm-disk-into-new-datavolume_{context}"] = Cloning the PVC of a VM disk into a new data volume +[role="_abstract"] You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk into a new data volume. The new data volume can then be used for a new virtual machine. diff --git a/modules/virt-cloning-pvc-to-dv-cli.adoc b/modules/virt-cloning-pvc-to-dv-cli.adoc index b2a8e6d8d30a..2ad9822d6194 100644 --- a/modules/virt-cloning-pvc-to-dv-cli.adoc +++ b/modules/virt-cloning-pvc-to-dv-cli.adoc @@ -6,6 +6,7 @@ [id="virt-cloning-pvc-to-dv-cli_{context}"] = Cloning a PVC to a data volume +[role="_abstract"] You can clone the persistent volume claim (PVC) of an existing virtual machine (VM) disk to a data volume by using the command line. You create a data volume that references the original source PVC. The lifecycle of the new data volume is independent of the original VM. Deleting the original VM does not affect the new data volume or its associated PVC. @@ -31,7 +32,8 @@ endif::openshift-rosa,openshift-dedicated[] ** The source and target PVCs must have the same storage provider and volume mode. ** The value of the `driver` key of the `VolumeSnapshotClass` object must match the value of the `provisioner` key of the `StorageClass` object as shown in the following example: + -.Example `VolumeSnapshotClass` object +Example `VolumeSnapshotClass` object: ++ [source,yaml] ---- kind: VolumeSnapshotClass @@ -40,7 +42,8 @@ driver: openshift-storage.rbd.csi.ceph.com # ...

---- + -.Example `StorageClass` object +Example `StorageClass` object: ++ [source,yaml] ---- kind: StorageClass diff --git a/modules/virt-cloning-vm-web.adoc b/modules/virt-cloning-vm-web.adoc index b793576b4a0a..4ec22d50f031 100644 --- a/modules/virt-cloning-vm-web.adoc +++ b/modules/virt-cloning-vm-web.adoc @@ -6,6 +6,7 @@ [id="virt-cloning-vm-snapshot_{context}"] = Cloning a VM by using the web console +[role="_abstract"] You can clone an existing VM by using the web console. .Procedure diff --git a/modules/virt-cluster-resource-requirements.adoc b/modules/virt-cluster-resource-requirements.adoc index 5a090c86f7b6..df4c91f97dd9 100644 --- a/modules/virt-cluster-resource-requirements.adoc +++ b/modules/virt-cluster-resource-requirements.adoc @@ -6,33 +6,35 @@ [id="virt-cluster-resource-requirements_{context}"] = Physical resource overhead requirements -{VirtProductName} is an add-on to {product-title} and imposes additional overhead that you must account for when planning a cluster. Each cluster machine must accommodate the following overhead requirements in addition to the {product-title} requirements. Oversubscribing the physical resources in a cluster can affect performance. +[role="_abstract"] +{VirtProductName} is an add-on to {product-title} and imposes additional overhead that you must account for when planning a cluster. + +Each cluster machine must accommodate the following overhead requirements in addition to the {product-title} requirements. Oversubscribing the physical resources in a cluster can affect performance. [IMPORTANT] ==== The numbers noted in this documentation are based on Red Hat's test methodology and setup. These numbers can vary based on your own individual setup and environments. ==== -[discrete] [id="memory-overhead_{context}"] == Memory overhead Calculate the memory overhead values for {VirtProductName} by using the equations below. 
-.Cluster memory overhead - +Cluster memory overhead:: ++ ---- Memory overhead per infrastructure node ≈ 150 MiB ---- - ++ ---- Memory overhead per worker node ≈ 360 MiB ---- - ++ Additionally, {VirtProductName} environment resources require a total of 2179 MiB of RAM that is spread across all infrastructure nodes. -.Virtual machine memory overhead - +Virtual machine memory overhead:: ++ ---- Memory overhead per virtual machine ≈ (0.002 × requested memory) \ + 218 MiB \ <1> @@ -48,48 +50,46 @@ Memory overhead per virtual machine ≈ (0.002 × requested memory) \ * If Secure Encrypted Virtualization (SEV) is enabled, add 256 MiB. * If Trusted Platform Module (TPM) is enabled, add 53 MiB. -[discrete] [id="CPU-overhead_{context}"] == CPU overhead Calculate the cluster processor overhead requirements for {VirtProductName} by using the equation below. The CPU overhead per virtual machine depends on your individual setup. -.Cluster CPU overhead - +Cluster CPU overhead:: ++ ---- CPU overhead for infrastructure nodes ≈ 4 cores ---- - ++ {VirtProductName} increases the overall utilization of cluster level services such as logging, routing, and monitoring. To account for this workload, ensure that nodes that host infrastructure components have capacity allocated for 4 additional cores (4000 millicores) distributed across those nodes. - ++ ---- CPU overhead for worker nodes ≈ 2 cores + CPU overhead per virtual machine ---- - ++ Each worker node that hosts virtual machines must have capacity for 2 additional cores (2000 millicores) for {VirtProductName} management workloads in addition to the CPUs required for virtual machine workloads. -.Virtual machine CPU overhead - +Virtual machine CPU overhead:: ++ If dedicated CPUs are requested, there is a 1:1 impact on the cluster CPU overhead requirement. Otherwise, there are no specific rules about how many CPUs a virtual machine requires. 
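As a rough illustration, the per-VM memory overhead terms quoted above combine into a quick calculation. This is a hedged sketch: it models only the base term, plus the SEV and TPM additions stated in this section, and omits the per-vCPU and per-device terms of the full formula.

```python
def vm_memory_overhead_mib(requested_mib, sev=False, tpm=False):
    """Estimate per-VM memory overhead in MiB.

    Models only the terms quoted in this section:
    (0.002 x requested memory) + 218 MiB, plus 256 MiB when SEV is
    enabled and 53 MiB when TPM is enabled. The full formula includes
    additional per-vCPU and per-device terms not modeled here.
    """
    overhead = 0.002 * requested_mib + 218
    if sev:
        overhead += 256
    if tpm:
        overhead += 53
    return overhead

# A VM requesting 8 GiB (8192 MiB) with TPM enabled:
# 0.002 * 8192 + 218 + 53 = 16.384 + 271 = 287.384 MiB
print(round(vm_memory_overhead_mib(8192, tpm=True), 3))
```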
-[discrete] [id="storage-overhead_{context}"] == Storage overhead Use the guidelines below to estimate storage overhead requirements for your {VirtProductName} environment. -.Cluster storage overhead - +Cluster storage overhead:: ++ ---- Aggregated storage overhead per node ≈ 10 GiB ---- - ++ 10 GiB is the estimated on-disk storage impact for each node in the cluster when you install {VirtProductName}. -.Virtual machine storage overhead - +Virtual machine storage overhead:: ++ Storage overhead per virtual machine depends on specific requests for resource allocation within the virtual machine. The request could be for ephemeral storage on the node or storage resources hosted elsewhere in the cluster. {VirtProductName} does not currently allocate any additional ephemeral storage for the running container itself. -.Example - +Example:: ++ As a cluster administrator, if you plan to host 10 virtual machines in the cluster, each with 1 GiB of RAM and 2 vCPUs, the memory impact across the cluster is 11.68 GiB. The estimated on-disk storage impact for each node in the cluster is 10 GiB and the CPU impact for worker nodes that host virtual machine workloads is a minimum of 2 cores. diff --git a/modules/virt-cluster-role-VNC.adoc b/modules/virt-cluster-role-VNC.adoc index 501bb10a072e..fb00b52570af 100644 --- a/modules/virt-cluster-role-VNC.adoc +++ b/modules/virt-cluster-role-VNC.adoc @@ -6,6 +6,7 @@ [id="virt-cluster-role-VNC_{context}"] = Granting token generation permission for the VNC console by using the cluster role +[role="_abstract"] As a cluster administrator, you can install a cluster role and bind it to a user or service account to allow access to the endpoint that generates tokens for the VNC console. 
.Procedure @@ -24,4 +25,4 @@ $ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevir [source,terminal] ---- $ kubectl create rolebinding "${ROLE_BINDING_NAME}" --clusterrole="token.kubevirt.io:generate" --serviceaccount="${SERVICE_ACCOUNT_NAME}" ----- \ No newline at end of file +---- diff --git a/modules/virt-common-error-messages.adoc b/modules/virt-common-error-messages.adoc index 39899c905235..7caa97b95585 100644 --- a/modules/virt-common-error-messages.adoc +++ b/modules/virt-common-error-messages.adoc @@ -6,6 +6,7 @@ [id="virt-common-error-messages_{context}"] = Common error messages -The following error messages might appear in {VirtProductName} logs: +[role="_abstract"] +The following error messages might appear in {VirtProductName} logs. -`ErrImagePull` or `ImagePullBackOff`:: Indicates an incorrect deployment configuration or problems with the images that are referenced. \ No newline at end of file +`ErrImagePull` or `ImagePullBackOff`:: Indicates an incorrect deployment configuration or problems with the images that are referenced. diff --git a/modules/virt-common-instancetypes.adoc b/modules/virt-common-instancetypes.adoc index 93f6f61f2636..5566a2b9a008 100644 --- a/modules/virt-common-instancetypes.adoc +++ b/modules/virt-common-instancetypes.adoc @@ -6,6 +6,7 @@ [id="virt-common-instancetypes_{context}"] = Pre-defined instance types +[role="_abstract"] {VirtProductName} includes a set of pre-defined instance types called `common-instancetypes`. Some are specialized for specific workloads and others are workload-agnostic. These instance type resources are named according to their series, version, and size. The size value follows the `.` delimiter and ranges from `nano` to `8xlarge`. 
@@ -68,4 +69,4 @@ a| .^a|`m1.large`:: * 2 vCPUs * 16GiB Memory -|=== \ No newline at end of file +|=== diff --git a/modules/virt-configure-ksm-cli.adoc b/modules/virt-configure-ksm-cli.adoc index 1e44ebebb7a7..69e11b430073 100644 --- a/modules/virt-configure-ksm-cli.adoc +++ b/modules/virt-configure-ksm-cli.adoc @@ -7,6 +7,7 @@ [id="virt-configure-ksm-cli_{context}"] = Configuring KSM activation by using the CLI +[role="_abstract"] You can enable or disable {VirtProductName}'s kernel samepage merging (KSM) activation feature by editing the `HyperConverged` custom resource (CR). Use this method if you want {VirtProductName} to activate KSM on only a subset of nodes. .Prerequisites diff --git a/modules/virt-configure-ksm-web.adoc b/modules/virt-configure-ksm-web.adoc index 9b7254f62f20..c76b065e6f02 100644 --- a/modules/virt-configure-ksm-web.adoc +++ b/modules/virt-configure-ksm-web.adoc @@ -7,6 +7,7 @@ [id="virt-configure-ksm-web_{context}"] = Configuring KSM activation by using the web console +[role="_abstract"] You can allow {VirtProductName} to activate kernel samepage merging (KSM) on all nodes in your cluster by using the {product-title} web console. .Procedure diff --git a/modules/virt-configure-multiple-iothreads.adoc b/modules/virt-configure-multiple-iothreads.adoc index 8a418ea6bfbe..920eb6ab37ea 100644 --- a/modules/virt-configure-multiple-iothreads.adoc +++ b/modules/virt-configure-multiple-iothreads.adoc @@ -4,8 +4,9 @@ :_mod-docs-content-type: PROCEDURE [id="virt-configure-multiple-iothreads_{context}"] -== Configuring multiple IOThreads for fast storage access += Configuring multiple IOThreads for fast storage access +[role="_abstract"] You can improve storage performance by configuring multiple IOThreads for a virtual machine (VM) that uses fast storage, such as solid-state drive (SSD) or non-volatile memory express (NVMe). This configuration option is only available by editing YAML of the VM. [NOTE] @@ -39,7 +40,7 @@ domain: ---- . Click *Save*. 
- ++ [IMPORTANT] ==== The `spec.template.spec.domain` setting cannot be changed while the VM is running. You must stop the VM before applying the changes, and then restart the VM for the new settings to take effect. diff --git a/modules/virt-configuring-a-live-migration-policy.adoc b/modules/virt-configuring-a-live-migration-policy.adoc index b8c040d3f6fe..977d4ceb808d 100644 --- a/modules/virt-configuring-a-live-migration-policy.adoc +++ b/modules/virt-configuring-a-live-migration-policy.adoc @@ -6,7 +6,10 @@ [id="virt-configuring-a-live-migration-policy_{context}"] = Creating a live migration policy by using the CLI -You can create a live migration policy by using the command line. KubeVirt applies the live migration policy to selected virtual machines (VMs) by using any combination of labels: +[role="_abstract"] +You can create a live migration policy by using the command line. + +KubeVirt applies the live migration policy to selected virtual machines (VMs) by using any combination of labels: * VM labels such as `size`, `os`, or `gpu` * Project labels such as `priority`, `bandwidth`, or `hpc-workload` diff --git a/modules/virt-configuring-aaq-operator.adoc b/modules/virt-configuring-aaq-operator.adoc index 63dcfa0eee64..92980acdf79c 100644 --- a/modules/virt-configuring-aaq-operator.adoc +++ b/modules/virt-configuring-aaq-operator.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-aaq-operator_{context}"] = Configuring the AAQ Operator by using the CLI +[role="_abstract"] You can configure the AAQ Operator by specifying the fields of the `spec.applicationAwareConfig` object in the `HyperConverged` custom resource (CR). .Prerequisites @@ -42,4 +43,4 @@ where: * `DedicatedVirtualResources` (default): Similar to `VirtualResources`, but separates resource tracking for pods associated with VMs by adding a `/vmi` suffix to CPU and memory resource names. For example, `requests.cpu/vmi` and `requests.memory/vmi`. 
-- `namespaceSelector`:: Determines the namespaces for which an AAQ scheduling gate is added to pods when they are created. If a namespace selector is not defined, the AAQ Operator targets namespaces with the `application-aware-quota/enable-gating` label as default. -`allowApplicationAwareClusterResourceQuota`:: If set to `true`, you can create and manage the `ApplicationAwareClusterResourceQuota` object. Setting this attribute to `true` can increase scheduling time. \ No newline at end of file +`allowApplicationAwareClusterResourceQuota`:: If set to `true`, you can create and manage the `ApplicationAwareClusterResourceQuota` object. Setting this attribute to `true` can increase scheduling time. diff --git a/modules/virt-configuring-certificate-rotation.adoc b/modules/virt-configuring-certificate-rotation.adoc index cbf176218a47..ef7c550d3b4c 100644 --- a/modules/virt-configuring-certificate-rotation.adoc +++ b/modules/virt-configuring-certificate-rotation.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-certificate-rotation_{context}"] = Configuring certificate rotation +[role="_abstract"] You can do this during {VirtProductName} installation in the web console or after installation in the `HyperConverged` custom resource (CR). .Prerequisites diff --git a/modules/virt-configuring-cluster-dpdk.adoc b/modules/virt-configuring-cluster-dpdk.adoc index 911574758f8d..1ba96acf7385 100644 --- a/modules/virt-configuring-cluster-dpdk.adoc +++ b/modules/virt-configuring-cluster-dpdk.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-cluster-dpdk_{context}"] = Configuring a cluster for DPDK workloads +[role="_abstract"] You can configure an {product-title} cluster to run Data Plane Development Kit (DPDK) workloads for improved network performance. .Prerequisites @@ -35,9 +36,10 @@ $ rosa edit machinepool --cluster= node-role.kube ---- endif::openshift-rosa[] -.. Create a new `MachineConfigPool` manifest that contains the `worker-dpdk` label in the `spec.machineConfigSelector` object: +.. 
Create a new `MachineConfigPool` manifest that contains the `worker-dpdk` label in the `spec.machineConfigSelector` object. ++ +Example `MachineConfigPool` manifest: + -.Example `MachineConfigPool` manifest [source,yaml] ---- apiVersion: machineconfiguration.openshift.io/v1 @@ -61,7 +63,8 @@ spec: . Create a `PerformanceProfile` manifest that applies to the labeled nodes and the machine config pool that you created in the previous steps. The performance profile specifies the CPUs that are isolated for DPDK applications and the CPUs that are reserved for housekeeping. + -.Example `PerformanceProfile` manifest +Example `PerformanceProfile` manifest: ++ [source,yaml] ---- apiVersion: performance.openshift.io/v2 @@ -126,9 +129,10 @@ Enabling `AlignCPUs` allows {VirtProductName} to request up to two additional de emulator thread isolation. ==== -. Create an `SriovNetworkNodePolicy` object with the `spec.deviceType` field set to `vfio-pci`: +. Create an `SriovNetworkNodePolicy` object with the `spec.deviceType` field set to `vfio-pci`. ++ +Example `SriovNetworkNodePolicy` manifest: + -.Example `SriovNetworkNodePolicy` manifest [source,yaml] ---- apiVersion: sriovnetwork.openshift.io/v1 diff --git a/modules/virt-configuring-cluster-eviction-strategy-cli.adoc b/modules/virt-configuring-cluster-eviction-strategy-cli.adoc index 57d3099db496..1237ff9f0b63 100644 --- a/modules/virt-configuring-cluster-eviction-strategy-cli.adoc +++ b/modules/virt-configuring-cluster-eviction-strategy-cli.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-cluster-eviction-strategy-cli_{context}"] = Configuring a cluster eviction strategy by using the CLI +[role="_abstract"] You can configure an eviction strategy for a cluster by using the command line. .Prerequisites @@ -23,7 +24,8 @@ $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} . 
Set the cluster eviction strategy as shown in the following example: + -.Example cluster eviction strategy +Example cluster eviction strategy: ++ [source,yaml] ---- apiVersion: hco.kubevirt.io/v1beta1 diff --git a/modules/virt-configuring-cluster-real-time.adoc b/modules/virt-configuring-cluster-real-time.adoc index 99b3a3e84147..e23baa8fb3d1 100644 --- a/modules/virt-configuring-cluster-real-time.adoc +++ b/modules/virt-configuring-cluster-real-time.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-cluster-real-time_{context}"] = Configuring a cluster for real-time workloads +[role="_abstract"] You can configure an {product-title} cluster to run real-time workloads. .Prerequisites @@ -27,9 +28,10 @@ $ oc label node node-role.kubernetes.io/worker-realtime="" You must use the default `master` role for {sno} and compact clusters. ==== -. Create a new `MachineConfigPool` manifest that contains the `worker-realtime` label in the `spec.machineConfigSelector` object: +. Create a new `MachineConfigPool` manifest that contains the `worker-realtime` label in the `spec.machineConfigSelector` object. ++ +Example `MachineConfigPool` manifest: + -.Example `MachineConfigPool` manifest [source,yaml] ---- apiVersion: machineconfiguration.openshift.io/v1 @@ -63,9 +65,10 @@ You do not need to create a new `MachineConfigPool` manifest for {sno} and compa $ oc apply -f .yaml ---- -. Create a `PerformanceProfile` manifest that applies to the labeled nodes and the machine config pool that you created in the previous steps: +. Create a `PerformanceProfile` manifest that applies to the labeled nodes and the machine config pool that you created in the previous steps. 
++ +Example `PerformanceProfile` manifest: + -.Example `PerformanceProfile` manifest [source,yaml] ---- apiVersion: performance.openshift.io/v2 @@ -137,4 +140,4 @@ $ oc patch hyperconverged kubevirt-hyperconverged -n {CNVNamespace} \ ==== Enabling `alignCPUs` allows {VirtProductName} to request up to two additional dedicated CPUs to bring the total CPU count to an even parity when using emulator thread isolation. -==== \ No newline at end of file +==== diff --git a/modules/virt-configuring-default-and-virt-default-storage-class.adoc b/modules/virt-configuring-default-and-virt-default-storage-class.adoc index d51e61bb1ead..2c55c1773d20 100644 --- a/modules/virt-configuring-default-and-virt-default-storage-class.adoc +++ b/modules/virt-configuring-default-and-virt-default-storage-class.adoc @@ -7,7 +7,10 @@ [id="virt-configuring-default-and-virt-default-storage-class_{context}"] = Configuring the default and virt-default storage classes -A storage class determines how persistent storage is provisioned for workloads. In {VirtProductName}, the virt-default storage class takes precedence over the cluster default storage class and is used specifically for virtualization workloads. Only one storage class should be set as virt-default or cluster default at a time. If multiple storage classes are marked as default, the virt-default storage class overrides the cluster default. To ensure consistent behavior, configure only one storage class as the default for virtualization workloads. +[role="_abstract"] +A storage class determines how persistent storage is provisioned for workloads. In {VirtProductName}, the virt-default storage class takes precedence over the cluster default storage class and is used specifically for virtualization workloads. + +Only one storage class should be set as virt-default or cluster default at a time. If multiple storage classes are marked as default, the virt-default storage class overrides the cluster default. 
To ensure consistent behavior, configure only one storage class as the default for virtualization workloads. [IMPORTANT] ==== diff --git a/modules/virt-configuring-default-cpu-model.adoc b/modules/virt-configuring-default-cpu-model.adoc index c9bdf2b658a6..eb1169a5b318 100644 --- a/modules/virt-configuring-default-cpu-model.adoc +++ b/modules/virt-configuring-default-cpu-model.adoc @@ -6,7 +6,8 @@ [id="virt-configuring-default-cpu-model_{context}"] = Configuring the default CPU model -Configure the `defaultCPUModel` by updating the `HyperConverged` custom resource (CR). You can change the `defaultCPUModel` while {VirtProductName} is running. +[role="_abstract"] +You can configure the `defaultCPUModel` by updating the `HyperConverged` custom resource (CR). You can change the `defaultCPUModel` while {VirtProductName} is running. [NOTE] ==== @@ -40,4 +41,4 @@ spec: defaultCPUModel: "EPYC" ---- -. Apply the YAML file to your cluster. \ No newline at end of file +. Apply the YAML file to your cluster. diff --git a/modules/virt-configuring-descheduler-evictions.adoc b/modules/virt-configuring-descheduler-evictions.adoc index 2521b6f00c01..d70420539839 100644 --- a/modules/virt-configuring-descheduler-evictions.adoc +++ b/modules/virt-configuring-descheduler-evictions.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-descheduler-evictions_{context}"] = Configuring descheduler evictions for virtual machines +[role="_abstract"] After the descheduler is installed and configured, all migratable virtual machines (VMs) are eligible for eviction by default. You can configure the descheduler to manage VM evictions across the cluster and optionally exclude specific VMs from eviction. .Prerequisites @@ -73,4 +74,6 @@ spec: . Start the VM. +.Result + The VM is now configured according to the descheduler settings. 
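The default and virt-default selection described in the storage class section above is expressed through annotations on the `StorageClass` object. A hedged sketch, assuming the upstream annotation names; the class name is a placeholder:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: virt-rbd                                            # placeholder name
  annotations:
    # Assumed annotation names: the first marks the virt-default class,
    # the second is the standard cluster-default annotation.
    storageclass.kubevirt.io/is-default-virt-class: "true"
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: openshift-storage.rbd.csi.ceph.com
```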
diff --git a/modules/virt-configuring-disk-sharing-lun-cli.adoc b/modules/virt-configuring-disk-sharing-lun-cli.adoc index 1fe4f3243b11..cbbc1c4e05fa 100644 --- a/modules/virt-configuring-disk-sharing-lun-cli.adoc +++ b/modules/virt-configuring-disk-sharing-lun-cli.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-disk-sharing-lun-cli_{context}"] = Configuring disk sharing by using LUN and the CLI +[role="_abstract"] You can use the command line to configure disk sharing by using LUN. .Procedure @@ -44,4 +45,4 @@ spec: <1> Identifies a LUN disk. <2> Identifies that the persistent reservation is enabled. -. Save the `VirtualMachine` manifest file to apply your changes. \ No newline at end of file +. Save the `VirtualMachine` manifest file to apply your changes. diff --git a/modules/virt-configuring-disk-sharing-lun-web.adoc b/modules/virt-configuring-disk-sharing-lun-web.adoc index 3738027d531b..d8a2e14ca5aa 100644 --- a/modules/virt-configuring-disk-sharing-lun-web.adoc +++ b/modules/virt-configuring-disk-sharing-lun-web.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-disk-sharing-lun-web_{context}"] = Configuring disk sharing by using LUN and the web console +[role="_abstract"] You can use the {product-title} web console to configure disk sharing by using LUN. .Prerequisites diff --git a/modules/virt-configuring-disk-sharing-lun.adoc b/modules/virt-configuring-disk-sharing-lun.adoc index 8d1129d240e5..2bfd23890e79 100644 --- a/modules/virt-configuring-disk-sharing-lun.adoc +++ b/modules/virt-configuring-disk-sharing-lun.adoc @@ -6,7 +6,10 @@ [id="virt-configuring-disk-sharing-lun_{context}"] = Configuring disk sharing by using LUN -To secure data on your VM from outside access, you can enable SCSI persistent reservation and configure a LUN-backed virtual machine disk to be shared among multiple virtual machines. 
By enabling the shared option, you can use advanced SCSI commands, such as those required for a Windows failover clustering implementation, for managing the underlying storage. +[role="_abstract"] +To secure data on your VM from outside access, you can enable SCSI persistent reservation and configure a LUN-backed virtual machine disk to be shared among multiple virtual machines. + +By enabling the shared option, you can use advanced SCSI commands, such as those required for a Windows failover clustering implementation, for managing the underlying storage. When a storage volume is configured as the `LUN` disk type, a VM can use the volume as a logical unit number (LUN) device. As a result, the VM can deploy and manage the disk by using SCSI commands. @@ -75,4 +78,4 @@ spec: <2> Identifies a LUN disk. <3> Identifies that the persistent reservation is enabled. -. Save the `VirtualMachine` manifest file to apply your changes. \ No newline at end of file +. Save the `VirtualMachine` manifest file to apply your changes. diff --git a/modules/virt-configuring-downward-metrics.adoc b/modules/virt-configuring-downward-metrics.adoc index 61278773bde0..a0a419bcd6cb 100644 --- a/modules/virt-configuring-downward-metrics.adoc +++ b/modules/virt-configuring-downward-metrics.adoc @@ -6,7 +6,8 @@ [id="virt-configuring-downward-metrics_{context}"] = Configuring a downward metrics device -You enable the capturing of downward metrics for a host VM by creating a configuration file that includes a `downwardMetrics` device. Adding this device establishes that the metrics are exposed through a `virtio-serial` port. +[role="_abstract"] +You can enable the capturing of downward metrics for a host VM by creating a configuration file that includes a `downwardMetrics` device. Adding this device establishes that the metrics are exposed through a `virtio-serial` port. 
.Prerequisites @@ -16,7 +17,6 @@ You enable the capturing of downward metrics for a host VM by creating a configu * Edit or create a YAML file that includes a `downwardMetrics` device, as shown in the following example: + -.Example downwardMetrics configuration file [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-configuring-highburst-profile.adoc b/modules/virt-configuring-highburst-profile.adoc index 7430c0da3852..3d2a1b44ae11 100644 --- a/modules/virt-configuring-highburst-profile.adoc +++ b/modules/virt-configuring-highburst-profile.adoc @@ -7,7 +7,8 @@ [id="virt-configuring-highburst-profile_{context}"] = Configuring a highBurst profile -Use the `highBurst` profile to create and maintain a large number of virtual machines (VMs) in one cluster. +[role="_abstract"] +You can use the `highBurst` profile to create and maintain a large number of virtual machines (VMs) in one cluster. .Prerequisites diff --git a/modules/virt-configuring-huge-pages-for-vms.adoc b/modules/virt-configuring-huge-pages-for-vms.adoc index 2c06ecd197e8..43b994975ddb 100644 --- a/modules/virt-configuring-huge-pages-for-vms.adoc +++ b/modules/virt-configuring-huge-pages-for-vms.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-huge-pages-for-vms_{context}"] = Configuring huge pages for virtual machines +[role="_abstract"] You can configure virtual machines to use pre-allocated huge pages by including the `memory.hugepages.pageSize` and `resources.requests.memory` parameters in your virtual machine configuration. 
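As a sketch of how the `memory.hugepages.pageSize` and `resources.requests.memory` parameters named in the huge pages module fit together in a `VirtualMachine` manifest (the VM name and the sizes are illustrative, not taken from this patch):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: hugepages-vm-example  # illustrative name
spec:
  template:
    spec:
      domain:
        memory:
          hugepages:
            pageSize: "1Gi"  # back guest memory with pre-allocated 1Gi huge pages
        resources:
          requests:
            memory: "4Gi"    # total guest memory; must be a multiple of pageSize
# ...
```

The requested memory must be divisible by the page size, and the nodes must have enough huge pages of that size pre-allocated.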
diff --git a/modules/virt-configuring-interface-link-state-web.adoc b/modules/virt-configuring-interface-link-state-web.adoc index 740d42360168..cbf6a1e567dc 100644 --- a/modules/virt-configuring-interface-link-state-web.adoc +++ b/modules/virt-configuring-interface-link-state-web.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-interface-link-state-web_{context}"] = Setting the VM interface link state by using the web console +[role="_abstract"] You can set the link state of a primary or secondary virtual machine (VM) network interface by using the web console. .Prerequisites @@ -22,4 +23,4 @@ You can set the link state of a primary or secondary virtual machine (VM) networ . Choose the appropriate option to set the interface link state: ** If the current interface link state is `up`, select *Set link down*. -** If the current interface link state is `down`, select *Set link up*. \ No newline at end of file +** If the current interface link state is `down`, select *Set link up*. diff --git a/modules/virt-configuring-interface-link-state.adoc b/modules/virt-configuring-interface-link-state.adoc index 33079c899300..27bb875ad853 100644 --- a/modules/virt-configuring-interface-link-state.adoc +++ b/modules/virt-configuring-interface-link-state.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-interface-link-state_{context}"] = Setting the VM interface link state by using the CLI +[role="_abstract"] You can set the link state of a primary or secondary virtual machine (VM) network interface by using the CLI. 
.Prerequisites @@ -62,7 +63,8 @@ $ oc apply -f .yaml $ oc get vmi ---- + -.Example output +Example output: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-configuring-ip-vm-cli.adoc b/modules/virt-configuring-ip-vm-cli.adoc index 7018b7310e9b..dc4788b13806 100644 --- a/modules/virt-configuring-ip-vm-cli.adoc +++ b/modules/virt-configuring-ip-vm-cli.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-ip-vm-cli_{context}"] = Configuring an IP address when creating a virtual machine by using the CLI +[role="_abstract"] You can configure a static or dynamic IP address when you create a virtual machine (VM). The IP address is provisioned with cloud-init. [NOTE] diff --git a/modules/virt-configuring-ip-vm-web.adoc b/modules/virt-configuring-ip-vm-web.adoc index 13aaec5ec7eb..0ca6111cb394 100644 --- a/modules/virt-configuring-ip-vm-web.adoc +++ b/modules/virt-configuring-ip-vm-web.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-ip-vm-web_{context}"] = Configuring a static IP address when creating a virtual machine by using the web console +[role="_abstract"] You can configure a static IP address when you create a virtual machine (VM) by using the web console. The IP address is provisioned with cloud-init. [NOTE] @@ -27,4 +28,4 @@ If the VM is connected to the pod network, the pod network interface is the defa . Select the *Add network data* checkbox. . Enter the ethernet name, one or more IP addresses separated by commas, and the gateway address. . Click *Apply*. -. Click *Create VirtualMachine*. \ No newline at end of file +. Click *Create VirtualMachine*. 
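Both IP-configuration modules above note that the address is provisioned with cloud-init. A minimal sketch of what that can look like in the `volumes` section of a `VirtualMachine` manifest, assuming a `cloudInitNoCloud` volume (the interface name, address, and gateway are placeholders):

```yaml
volumes:
  - name: cloudinitdisk
    cloudInitNoCloud:
      networkData: |
        version: 2
        ethernets:
          eth1:                    # placeholder interface name
            addresses:
              - 10.10.10.14/24     # static IP address in CIDR notation
            gateway4: 10.10.10.1   # placeholder gateway address
```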
diff --git a/modules/virt-configuring-live-migration-heavy.adoc b/modules/virt-configuring-live-migration-heavy.adoc index 8f99fc11363a..7a37b408aa7c 100644 --- a/modules/virt-configuring-live-migration-heavy.adoc +++ b/modules/virt-configuring-live-migration-heavy.adoc @@ -7,6 +7,7 @@ [id="virt-configuring-live-migration-heavy_{context}"] = Configure live migration for heavy workloads +[role="_abstract"] When migrating a VM running a heavy workload (for example, database processing) with higher memory dirty rates, you need a higher bandwidth to complete the migration. If the dirty rate is too high, the migration from one node to another does not converge. To prevent this, enable post copy mode. @@ -28,7 +29,8 @@ Configure live migration for heavy workloads by updating the `HyperConverged` cu $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} ---- + -.Example configuration file +Example configuration file: ++ [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1beta1 @@ -53,7 +55,7 @@ spec: <6> Use post copy mode when memory dirty rates are high to ensure the migration converges. Set `allowPostCopy` to `true` to enable post copy mode. . Optional: If your main network is too busy for the migration, configure a secondary, dedicated migration network. - ++ [NOTE] ==== Post copy mode can impact performance during the transfer, and should not be used for critical data, or with unstable networks. 
diff --git a/modules/virt-configuring-live-migration-limits.adoc b/modules/virt-configuring-live-migration-limits.adoc index 1a810c0598c7..1656539a4736 100644 --- a/modules/virt-configuring-live-migration-limits.adoc +++ b/modules/virt-configuring-live-migration-limits.adoc @@ -7,6 +7,7 @@ [id="virt-configuring-live-migration-limits_{context}"] = Configuring live migration limits and timeouts +[role="_abstract"] Configure live migration limits and timeouts for the cluster by updating the `HyperConverged` custom resource (CR), which is located in the `{CNVNamespace}` namespace. @@ -23,7 +24,9 @@ Configure live migration limits and timeouts for the cluster by updating the `Hy $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} ---- + -.Example configuration file +Example configuration file: ++ +-- [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1beta1 @@ -46,7 +49,8 @@ spec: <4> Maximum number of outbound migrations per node. Default: `2`. <5> The migration is canceled if memory copy fails to make progress in this time, in seconds. Default: `150`. <6> If a VM is running a heavy workload and the memory dirty rate is too high, this can prevent the migration from one node to another from converging. To prevent this, you can enable post copy mode. By default, `allowPostCopy` is set to `false`. - +-- ++ [NOTE] ==== You can restore the default value for any `spec.liveMigrationConfig` field by deleting that key/value pair and saving the file. For example, delete `progressTimeout: ` to restore the default `progressTimeout: 150`. 
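Putting the callouts above together, a `spec.liveMigrationConfig` stanza with the fields the module describes might look like this sketch (values other than the defaults stated in the callouts are illustrative):

```yaml
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
spec:
  liveMigrationConfig:
    bandwidthPerMigration: 64Mi            # illustrative bandwidth cap per migration
    completionTimeoutPerGiB: 800           # illustrative timeout, in seconds per GiB
    parallelMigrationsPerCluster: 5        # illustrative cluster-wide limit
    parallelOutboundMigrationsPerNode: 2   # default per the callouts
    progressTimeout: 150                   # default per the callouts, in seconds
    allowPostCopy: false                   # default per the callouts
```

As the module's note states, deleting any key/value pair and saving the file restores that field's default value.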
diff --git a/modules/virt-configuring-masquerade-mode-cli.adoc b/modules/virt-configuring-masquerade-mode-cli.adoc index 99af7582d3cf..b0f61e3a6bdd 100644 --- a/modules/virt-configuring-masquerade-mode-cli.adoc +++ b/modules/virt-configuring-masquerade-mode-cli.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-masquerade-mode-cli_{context}"] = Configuring masquerade mode from the CLI +[role="_abstract"] You can use masquerade mode to hide a virtual machine's outgoing traffic behind the pod IP address. Masquerade mode uses Network Address Translation (NAT) to connect virtual machines to the pod network backend through a Linux bridge. diff --git a/modules/virt-configuring-masquerade-mode-dual-stack.adoc b/modules/virt-configuring-masquerade-mode-dual-stack.adoc index 2c4eb37293bb..7fbace05358a 100644 --- a/modules/virt-configuring-masquerade-mode-dual-stack.adoc +++ b/modules/virt-configuring-masquerade-mode-dual-stack.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-masquerade-mode-dual-stack_{context}"] = Configuring masquerade mode with dual-stack (IPv4 and IPv6) +[role="_abstract"] You can configure a new virtual machine (VM) to use both IPv6 and IPv4 on the default pod network by using cloud-init. The `Network.pod.vmIPv6NetworkCIDR` field in the virtual machine instance configuration determines the static IPv6 address of the VM and the gateway IP address. These are used by the virt-launcher pod to route IPv6 traffic to the virtual machine and are not used externally. The `Network.pod.vmIPv6NetworkCIDR` field specifies an IPv6 address block in Classless Inter-Domain Routing (CIDR) notation. The default value is `fd10:0:2::2/120`. You can edit this value based on your network requirements. 
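The `Network.pod.vmIPv6NetworkCIDR` field described above sits on the pod network entry of the VM template. A minimal sketch, assuming masquerade binding on the default pod network (only the dual-stack-relevant fields of `spec.template.spec` are shown):

```yaml
domain:
  devices:
    interfaces:
      - name: default
        masquerade: {}
networks:
  - name: default
    pod:
      vmIPv6NetworkCIDR: fd10:0:2::2/120  # default value cited in the module
```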
diff --git a/modules/virt-configuring-node-exporter-service.adoc b/modules/virt-configuring-node-exporter-service.adoc index a6e698755846..251fa7c1b10f 100644 --- a/modules/virt-configuring-node-exporter-service.adoc +++ b/modules/virt-configuring-node-exporter-service.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-node-exporter-service_{context}"] = Configuring the node exporter service +[role="_abstract"] The node-exporter agent is deployed on every virtual machine in the cluster from which you want to collect metrics. Configure the node-exporter agent as a service to expose internal metrics and processes that are associated with virtual machines. .Prerequisites diff --git a/modules/virt-configuring-obsolete-cpu-models.adoc b/modules/virt-configuring-obsolete-cpu-models.adoc index e6789fed71e7..7cdb31008f03 100644 --- a/modules/virt-configuring-obsolete-cpu-models.adoc +++ b/modules/virt-configuring-obsolete-cpu-models.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-obsolete-cpu-models_{context}"] = Configuring obsolete CPU models +[role="_abstract"] You can configure a list of obsolete CPU models by editing the `HyperConverged` custom resource (CR). .Procedure diff --git a/modules/virt-configuring-pod-log-verbosity.adoc b/modules/virt-configuring-pod-log-verbosity.adoc index c2dfa21c0e9f..f3c6c7ed9807 100644 --- a/modules/virt-configuring-pod-log-verbosity.adoc +++ b/modules/virt-configuring-pod-log-verbosity.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-pod-log-verbosity_{context}"] = Configuring {VirtProductName} pod log verbosity +[role="_abstract"] You can configure the verbosity level of {VirtProductName} pod logs by editing the `HyperConverged` custom resource (CR). 
.Prerequisites diff --git a/modules/virt-configuring-rate-limiters.adoc b/modules/virt-configuring-rate-limiters.adoc index 8874115a0e3c..20da4a29310b 100644 --- a/modules/virt-configuring-rate-limiters.adoc +++ b/modules/virt-configuring-rate-limiters.adoc @@ -7,8 +7,9 @@ [id="virt-configuring-rate-limiters_{context}"] = Configuring rate limiters +[role="_abstract"] To compensate for large-scale burst rates, scale the `QPS` (Queries per Second) and `burst` rate limits to process a higher rate of client requests or API calls concurrently for each component. .Procedure -* Apply a `jsonpatch` annotation to adjust the `kubevirt-hyperconverged` cluster configuration by using `tuningPolicy` to apply scalable tuning parameters. This tuning policy automatically adjusts all virtualization components (`webhook`, `api`, `controller`, `handler`) to match the `QPS` and `burst` values specified by the profile. \ No newline at end of file +* Apply a `jsonpatch` annotation to adjust the `kubevirt-hyperconverged` cluster configuration by using `tuningPolicy` to apply scalable tuning parameters. This tuning policy automatically adjusts all virtualization components (`webhook`, `api`, `controller`, `handler`) to match the `QPS` and `burst` values specified by the profile. diff --git a/modules/virt-configuring-runstrategy-vm.adoc b/modules/virt-configuring-runstrategy-vm.adoc index 7eb939f8a8fc..24db3cda753c 100644 --- a/modules/virt-configuring-runstrategy-vm.adoc +++ b/modules/virt-configuring-runstrategy-vm.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-runstrategy-vm_{context}"] = Configuring a VM run strategy by using the CLI +[role="_abstract"] You can configure a run strategy for a virtual machine (VM) by using the command line. 
.Prerequisites @@ -21,7 +22,8 @@ You can configure a run strategy for a virtual machine (VM) by using the command $ oc edit vm -n ---- + -.Example run strategy +Example run strategy: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-configuring-secondary-dns-server.adoc b/modules/virt-configuring-secondary-dns-server.adoc index 90e88244021e..7ecadcd8aaae 100644 --- a/modules/virt-configuring-secondary-dns-server.adoc +++ b/modules/virt-configuring-secondary-dns-server.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-secondary-dns-server_{context}"] = Configuring a DNS server for secondary networks +[role="_abstract"] The Cluster Network Addons Operator (CNAO) deploys a Domain Name Server (DNS) server and monitoring components when you enable the `deployKubeSecondaryDNS` feature gate in the `HyperConverged` custom resource (CR). .Prerequisites @@ -56,7 +57,8 @@ $ oc expose -n {CNVNamespace} deployment/secondary-dns --name=dns-lb \ $ oc get service -n {CNVNamespace} ---- + -.Example output +Example output: ++ [source,text] ---- NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE @@ -96,7 +98,8 @@ spec: $ oc get dnses.config.openshift.io cluster -o jsonpath='{.spec.baseDomain}' ---- + -.Example output +Example output: ++ [source,text] ---- openshift.example.com diff --git a/modules/virt-configuring-secondary-network-vm-live-migration.adoc b/modules/virt-configuring-secondary-network-vm-live-migration.adoc index 499861c03a56..ff6982e5ec58 100644 --- a/modules/virt-configuring-secondary-network-vm-live-migration.adoc +++ b/modules/virt-configuring-secondary-network-vm-live-migration.adoc @@ -7,6 +7,7 @@ [id="virt-configuring-secondary-network-vm-live-migration_{context}"] = Configuring a dedicated secondary network for live migration +[role="_abstract"] To configure a dedicated secondary network for live migration, you must first create a bridge network attachment definition (NAD) by using the CLI. 
Then, you add the name of the `NetworkAttachmentDefinition` object to the `HyperConverged` custom resource (CR). .Prerequisites @@ -20,7 +21,6 @@ To configure a dedicated secondary network for live migration, you must first cr . Create a `NetworkAttachmentDefinition` manifest according to the following example: + -.Example configuration file [source,yaml,subs="attributes+"] ---- apiVersion: "k8s.cni.cncf.io/v1" @@ -53,9 +53,10 @@ spec: $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} ---- -. Add the name of the `NetworkAttachmentDefinition` object to the `spec.liveMigrationConfig` stanza of the `HyperConverged` CR: +. Add the name of the `NetworkAttachmentDefinition` object to the `spec.liveMigrationConfig` stanza of the `HyperConverged` CR. ++ +Example `HyperConverged` manifest: + -.Example `HyperConverged` manifest [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1beta1 diff --git a/modules/virt-configuring-storage-class-bootsource-update.adoc b/modules/virt-configuring-storage-class-bootsource-update.adoc index b85df03725b9..07b49dd31924 100644 --- a/modules/virt-configuring-storage-class-bootsource-update.adoc +++ b/modules/virt-configuring-storage-class-bootsource-update.adoc @@ -7,6 +7,7 @@ [id="virt-configuring-storage-class-bootsource-update_{context}"] = Configuring a storage class for boot source images +[role="_abstract"] You can configure a specific storage class in the `HyperConverged` resource. [IMPORTANT] diff --git a/modules/virt-configuring-vm-disk-sharing.adoc b/modules/virt-configuring-vm-disk-sharing.adoc index 7bb23b069a83..64e444d773f4 100644 --- a/modules/virt-configuring-vm-disk-sharing.adoc +++ b/modules/virt-configuring-vm-disk-sharing.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-vm-disk-sharing_{context}"] = Configuring disk sharing by using virtual machine disks +[role="_abstract"] You can configure block volumes so that multiple virtual machines (VMs) can share storage. 
The application running on the guest operating system determines the storage option you must configure for the VM. A disk of type `disk` exposes the volume as an ordinary disk to the VM. @@ -58,4 +59,4 @@ spec: <1> Identifies the error policy. <2> Identifies a shared disk. -. Save the `VirtualMachine` manifest file to apply your changes. \ No newline at end of file +. Save the `VirtualMachine` manifest file to apply your changes. diff --git a/modules/virt-configuring-vm-dpdk.adoc b/modules/virt-configuring-vm-dpdk.adoc index 030f5671e100..53ad32707888 100644 --- a/modules/virt-configuring-vm-dpdk.adoc +++ b/modules/virt-configuring-vm-dpdk.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-vm-dpdk_{context}"] = Configuring a virtual machine for DPDK workloads +[role="_abstract"] You can run Data Plane Development Kit (DPDK) workloads on virtual machines (VMs) to achieve lower latency and higher throughput for faster packet processing in the user space. DPDK uses the SR-IOV network for hardware-based I/O sharing. .Prerequisites @@ -14,9 +15,10 @@ You can run Data Plane Development Kit (DPDK) workloads on virtual machines (VM * You have installed the {oc-first}. .Procedure -. Edit the `VirtualMachine` manifest to include information about the SR-IOV network interface, CPU topology, CRI-O annotations, and huge pages: +. Edit the `VirtualMachine` manifest to include information about the SR-IOV network interface, CPU topology, CRI-O annotations, and huge pages. 
++ +Example `VirtualMachine` manifest: + -.Example `VirtualMachine` manifest [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-configuring-vm-eviction-strategy-cli.adoc b/modules/virt-configuring-vm-eviction-strategy-cli.adoc index 90e987cd6a4f..23ad3db99a65 100644 --- a/modules/virt-configuring-vm-eviction-strategy-cli.adoc +++ b/modules/virt-configuring-vm-eviction-strategy-cli.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-vm-eviction-strategy-cli_{context}"] = Configuring a VM eviction strategy using the CLI +[role="_abstract"] You can configure an eviction strategy for a virtual machine (VM) by using the command line. [IMPORTANT] @@ -28,7 +29,8 @@ You must set the eviction strategy of non-migratable VMs to `LiveMigrateIfPossib $ oc edit vm -n ---- + -.Example eviction strategy +Example eviction strategy: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-configuring-vm-project-dpdk.adoc b/modules/virt-configuring-vm-project-dpdk.adoc index 27a52f29d291..40d58b84c0a3 100644 --- a/modules/virt-configuring-vm-project-dpdk.adoc +++ b/modules/virt-configuring-vm-project-dpdk.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-vm-project-dpdk_{context}"] = Configuring a project for DPDK workloads +[role="_abstract"] You can configure the project to run DPDK workloads on SR-IOV hardware. .Prerequisites @@ -22,7 +23,8 @@ $ oc create ns dpdk-ns . Create an `SriovNetwork` object that references the `SriovNetworkNodePolicy` object. When you create an `SriovNetwork` object, the SR-IOV Network Operator automatically creates a `NetworkAttachmentDefinition` object. + -.Example `SriovNetwork` manifest +Example `SriovNetwork` manifest: ++ [source,yaml] ---- apiVersion: sriovnetwork.openshift.io/v1 @@ -51,4 +53,4 @@ spec: <1> The namespace where the `NetworkAttachmentDefinition` object is deployed. 
<2> The value of the `spec.resourceName` attribute of the `SriovNetworkNodePolicy` object that was created when configuring the cluster for DPDK workloads. -. Optional: Run the virtual machine latency checkup to verify that the network is properly configured. \ No newline at end of file +. Optional: Run the virtual machine latency checkup to verify that the network is properly configured. diff --git a/modules/virt-configuring-vm-real-time.adoc b/modules/virt-configuring-vm-real-time.adoc index 8f1e340e8078..5dddea82192f 100644 --- a/modules/virt-configuring-vm-real-time.adoc +++ b/modules/virt-configuring-vm-real-time.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-vm-real-time_{context}"] = Configuring a virtual machine for real-time workloads +[role="_abstract"] You can configure a virtual machine (VM) to run real-time workloads. .Prerequisites @@ -14,9 +15,10 @@ You can configure a virtual machine (VM) to run real-time workloads. * You have installed the {oc-first}. .Procedure -. Create a `VirtualMachine` manifest to include information about CPU topology, CRI-O annotations, and huge pages: +. Create a `VirtualMachine` manifest to include information about CPU topology, CRI-O annotations, and huge pages. ++ +Example `VirtualMachine` manifest: + -.Example `VirtualMachine` manifest [source,yaml] ---- apiVersion: kubevirt.io/v1 @@ -173,12 +175,16 @@ isolate_managed_irq=Y <2> ---- # cyclictest --priority 1 --policy fifo -h 50 -a 2-3 --mainaffinity 0,1 -t 2 -m -q -i 200 -D 12h ---- ++ where: ++ +-- `-a`:: Specifies the CPU set on which the test runs. This is the same as the isolated CPUs that you configured in the `realtime-variables.conf` file. `-D`:: Specifies the test duration. Append `m`, `h`, or `d` to specify minutes, hours or days. 
- +-- ++ +Example output: + -.Example output [source,terminal] ---- # Min Latencies: 00004 00004 diff --git a/modules/virt-configuring-vm-use-usb-device.adoc b/modules/virt-configuring-vm-use-usb-device.adoc index 63601ccc73fa..56c7d68fad7c 100644 --- a/modules/virt-configuring-vm-use-usb-device.adoc +++ b/modules/virt-configuring-vm-use-usb-device.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-vm-use-usb-device_{context}"] = Connecting a USB device to a virtual machine +[role="_abstract"] You can configure virtual machine (VM) access to a USB device. This configuration enables the VM to connect to USB hardware that is attached to an {product-title} node, as if the hardware and the VM are physically connected. .Prerequisites @@ -83,4 +84,4 @@ $ oc apply -f .yaml + where: -:: Specifies the name of the `VirtualMachineInstance` manifest YAML file. \ No newline at end of file +:: Specifies the name of the `VirtualMachineInstance` manifest YAML file. diff --git a/modules/virt-configuring-vm-with-node-exporter-service.adoc b/modules/virt-configuring-vm-with-node-exporter-service.adoc index ea6a8a098474..0174898d9eb6 100644 --- a/modules/virt-configuring-vm-with-node-exporter-service.adoc +++ b/modules/virt-configuring-vm-with-node-exporter-service.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-vm-with-node-exporter-service_{context}"] = Configuring a virtual machine with the node exporter service +[role="_abstract"] Download the `node-exporter` file on to the virtual machine. Then, create a `systemd` service that runs the node-exporter service when the virtual machine boots. 
.Prerequisites @@ -72,7 +73,8 @@ $ sudo systemctl start node_exporter.service $ curl http://localhost:9100/metrics ---- + -.Example output +Example output: ++ [source,terminal] ---- go_gc_duration_seconds{quantile="0"} 1.5244e-05 diff --git a/modules/virt-configuring-vm-with-persistent-efi.adoc b/modules/virt-configuring-vm-with-persistent-efi.adoc index 079430f43542..6d8d36f491ea 100644 --- a/modules/virt-configuring-vm-with-persistent-efi.adoc +++ b/modules/virt-configuring-vm-with-persistent-efi.adoc @@ -6,6 +6,7 @@ [id="configuring-vm-with-persistent-efi_{context}"] = Configuring VMs with persistent EFI +[role="_abstract"] You can configure a VM to have EFI persistence enabled by editing its manifest file. .Prerequisites @@ -31,4 +32,4 @@ spec: efi: persistent: true # ... ----- \ No newline at end of file +---- diff --git a/modules/virt-configuring-workload-update-methods.adoc b/modules/virt-configuring-workload-update-methods.adoc index d89e4e69b36b..da343af086f0 100644 --- a/modules/virt-configuring-workload-update-methods.adoc +++ b/modules/virt-configuring-workload-update-methods.adoc @@ -6,6 +6,7 @@ [id="virt-configuring-workload-update-methods_{context}"] = Configuring workload update methods +[role="_abstract"] You can configure workload update methods by editing the `HyperConverged` custom resource (CR). .Prerequisites diff --git a/modules/virt-confirming-policy-updates-on-nodes.adoc b/modules/virt-confirming-policy-updates-on-nodes.adoc index 8e0988b0f632..c5afde34c453 100644 --- a/modules/virt-confirming-policy-updates-on-nodes.adoc +++ b/modules/virt-confirming-policy-updates-on-nodes.adoc @@ -6,7 +6,9 @@ [id="virt-confirming-policy-updates-on-nodes_{context}"] = Confirming node network policy updates on nodes +[role="_abstract"] When you apply a node network policy, a `NodeNetworkConfigurationEnactment` object is created for every node in the cluster. 
The node network configuration enactment is a read-only object that represents the status of execution of the policy on that node. + If the policy fails to be applied on the node, the enactment for that node includes a traceback for troubleshooting. .Prerequisites diff --git a/modules/virt-connecting-secondary-network-ssh.adoc b/modules/virt-connecting-secondary-network-ssh.adoc index d43f5ee97dfc..7383feee4cea 100644 --- a/modules/virt-connecting-secondary-network-ssh.adoc +++ b/modules/virt-connecting-secondary-network-ssh.adoc @@ -6,6 +6,7 @@ [id="virt-connecting-secondary-network-ssh_{context}"] = Connecting to a VM attached to a secondary network by using SSH +[role="_abstract"] You can connect to a virtual machine (VM) attached to a secondary network by using SSH. .Prerequisites @@ -23,7 +24,8 @@ You can connect to a virtual machine (VM) attached to a secondary network by usi $ oc describe vm -n ---- + -.Example output +Example output: ++ ---- # ... Interfaces: @@ -44,7 +46,8 @@ Interfaces: $ ssh @ -i ---- + -.Example +Example: ++ [source,terminal] ---- $ ssh cloud-user@10.244.0.37 -i ~/.ssh/id_rsa_cloud-user diff --git a/modules/virt-connecting-service-ssh.adoc b/modules/virt-connecting-service-ssh.adoc index c1d59b233c98..7c7a03178284 100644 --- a/modules/virt-connecting-service-ssh.adoc +++ b/modules/virt-connecting-service-ssh.adoc @@ -7,6 +7,7 @@ [id="virt-connecting-service-ssh_{context}"] = Connecting to a VM exposed by a service by using SSH +[role="_abstract"] You can connect to a virtual machine (VM) that is exposed by a service by using SSH. 
.Prerequisites diff --git a/modules/virt-connecting-to-vm-console-web.adoc b/modules/virt-connecting-to-vm-console-web.adoc index 3226321495a1..0f2bd711c4d2 100644 --- a/modules/virt-connecting-to-vm-console-web.adoc +++ b/modules/virt-connecting-to-vm-console-web.adoc @@ -23,9 +23,11 @@ endif::[] = Connecting to the {console} by using the web console ifdef::vnc-console,serial-console[] +[role="_abstract"] You can connect to the {console} of a virtual machine (VM) by using the {product-title} web console. endif::[] ifdef::desktop-viewer[] +[role="_abstract"] You can connect to the {console} of a Windows virtual machine (VM) by using the {product-title} web console. endif::[] diff --git a/modules/virt-connecting-vm-internal-fqdn.adoc b/modules/virt-connecting-vm-internal-fqdn.adoc index e2b12a65ce2a..33eda3fb6b83 100644 --- a/modules/virt-connecting-vm-internal-fqdn.adoc +++ b/modules/virt-connecting-vm-internal-fqdn.adoc @@ -6,6 +6,7 @@ [id="virt-connecting-vm-internal-fqdn_{context}"] = Connecting to a virtual machine by using its internal FQDN +[role="_abstract"] You can connect to a virtual machine (VM) by using its internal fully qualified domain name (FQDN). .Prerequisites @@ -29,7 +30,8 @@ $ virtctl console vm-fedora $ ping myvm.mysubdomain..svc.cluster.local ---- + -.Example output +Example output: ++ [source,terminal] ---- PING myvm.mysubdomain.default.svc.cluster.local (10.244.0.57) 56(84) bytes of data. 
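The `myvm.mysubdomain.<namespace>.svc.cluster.local` name pinged above resolves only when the VM and cluster DNS are wired together. A hedged sketch of the usual pattern, a headless service whose name matches the VM's subdomain (the names and the label are placeholders, not values from this patch):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: mysubdomain   # must match spec.template.spec.subdomain on the VM
spec:
  clusterIP: None     # headless service: DNS records only, no virtual IP
  selector:
    expose: fqdn      # placeholder label also set on the VM template
```

With `hostname: myvm` and `subdomain: mysubdomain` set in the VM template spec, cluster DNS creates the FQDN shown in the ping output.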
diff --git a/modules/virt-connecting-vm-secondarynw-using-fqdn.adoc b/modules/virt-connecting-vm-secondarynw-using-fqdn.adoc index c895ebbca914..c956aaa8abce 100644 --- a/modules/virt-connecting-vm-secondarynw-using-fqdn.adoc +++ b/modules/virt-connecting-vm-secondarynw-using-fqdn.adoc @@ -6,6 +6,7 @@ [id="virt-connecting-vm-secondarynw-fqdn_{context}"] = Connecting to a VM on a secondary network by using the cluster FQDN +[role="_abstract"] You can access a running virtual machine (VM) attached to a secondary network interface by using the fully qualified domain name (FQDN) of the cluster. .Prerequisites @@ -32,7 +33,8 @@ $ oc get dnses.config.openshift.io cluster -o json | jq .spec.baseDomain $ oc get vm -n -o yaml ---- + -.Example output +Example output: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-connecting-vm-virtctl.adoc b/modules/virt-connecting-vm-virtctl.adoc index e69f0c444397..9ebace48e084 100644 --- a/modules/virt-connecting-vm-virtctl.adoc +++ b/modules/virt-connecting-vm-virtctl.adoc @@ -15,6 +15,7 @@ endif::[] [id="virt-connecting-vm-virtctl_{context}"] = Connecting to the {console} by using virtctl +[role="_abstract"] You can use the `virtctl` command-line tool to connect to the {console} of a running virtual machine. ifdef::vnc-console[] @@ -61,4 +62,4 @@ ifeval::["{context}" == "vnc-console"] endif::[] ifeval::["{context}" == "serial-console"] :!console: -endif::[] \ No newline at end of file +endif::[] diff --git a/modules/virt-connecting-vnc-console.adoc b/modules/virt-connecting-vnc-console.adoc index 5ce14d75a067..f88ece2de61c 100644 --- a/modules/virt-connecting-vnc-console.adoc +++ b/modules/virt-connecting-vnc-console.adoc @@ -6,6 +6,7 @@ [id="virt-connecting-vnc-console_{context}"] = Connecting to the VNC console +[role="_abstract"] Connect to the VNC console of a running virtual machine from the *Console* tab on the *VirtualMachine details* page of the web console. 
diff --git a/modules/virt-controlling-multiple-vms.adoc b/modules/virt-controlling-multiple-vms.adoc index 34aae090485f..bbafbe91768b 100644 --- a/modules/virt-controlling-multiple-vms.adoc +++ b/modules/virt-controlling-multiple-vms.adoc @@ -6,6 +6,7 @@ [id="virt-controlling-multiple-vms-web_{context}"] = Controlling the state of multiple virtual machines +[role="_abstract"] You can start, stop, restart, pause, and unpause multiple virtual machines (VMs) from the web console. .Procedure diff --git a/modules/virt-create-node-network-config-console.adoc b/modules/virt-create-node-network-config-console.adoc index 55cf6d73954d..09be8eef1317 100644 --- a/modules/virt-create-node-network-config-console.adoc +++ b/modules/virt-create-node-network-config-console.adoc @@ -2,6 +2,7 @@ [id="virt-create-node-network-config-console_{context}"] = Creating a policy +[role="_abstract"] You can create a policy by using either a form or YAML in the web console. When creating a policy using a form, you can see how the new policy changes the topology of the nodes in your cluster in real time. .Procedure @@ -54,4 +55,4 @@ If you have selected *DHCP* option, uncheck the options that you want to disable Alternatively, you can click *Edit YAML* on the top of the page to continue editing the form using YAML. ==== . Click *Next* to go to the *Review* section of the form. -. Verify the settings and click *Create* to create the policy. \ No newline at end of file +. Verify the settings and click *Create* to create the policy. 
diff --git a/modules/virt-creating-a-primary-cluster-udn.adoc b/modules/virt-creating-a-primary-cluster-udn.adoc index c15fc03f1321..76bcf86e1262 100644 --- a/modules/virt-creating-a-primary-cluster-udn.adoc +++ b/modules/virt-creating-a-primary-cluster-udn.adoc @@ -6,6 +6,7 @@ [id="virt-creating-a-primary-cluster-udn_{context}"] = Creating a primary cluster-scoped user-defined network by using the CLI +[role="_abstract"] You can connect multiple namespaces to the same primary user-defined network (UDN) to achieve native tenant isolation by using the CLI. .Prerequisites @@ -13,9 +14,10 @@ You can connect multiple namespaces to the same primary user-defined network (UD * You have installed the {oc-first}. .Procedure -. Create a `ClusterUserDefinedNetwork` object to specify the custom network configuration: +. Create a `ClusterUserDefinedNetwork` object to specify the custom network configuration. ++ +Example `ClusterUserDefinedNetwork` manifest: + -.Example `ClusterUserDefinedNetwork` manifest [source,yaml] ---- kind: ClusterUserDefinedNetwork diff --git a/modules/virt-creating-a-primary-udn.adoc b/modules/virt-creating-a-primary-udn.adoc index 16940913a347..69fa9202007e 100644 --- a/modules/virt-creating-a-primary-udn.adoc +++ b/modules/virt-creating-a-primary-udn.adoc @@ -6,6 +6,7 @@ [id="virt-creating-a-primary-udn_{context}"] = Creating a primary namespace-scoped user-defined network by using the CLI +[role="_abstract"] You can create an isolated primary network in your project namespace by using the CLI. You must use the OVN-Kubernetes layer 2 topology and enable persistent IP address allocation in the user-defined network (UDN) configuration to ensure VM live migration support. .Prerequisites @@ -13,9 +14,10 @@ You can create an isolated primary network in your project namespace by using th * You have created a namespace and applied the `k8s.ovn.org/primary-user-defined-network` label. .Procedure -. 
Create a `UserDefinedNetwork` object to specify the custom network configuration: +. Create a `UserDefinedNetwork` object to specify the custom network configuration. ++ +Example `UserDefinedNetwork` manifest: + -.Example `UserDefinedNetwork` manifest [source,yaml] ---- apiVersion: k8s.ovn.org/v1 diff --git a/modules/virt-creating-an-upload-dv.adoc b/modules/virt-creating-an-upload-dv.adoc index bcef4053be70..fe5b0b369d24 100644 --- a/modules/virt-creating-an-upload-dv.adoc +++ b/modules/virt-creating-an-upload-dv.adoc @@ -6,6 +6,7 @@ [id="virt-creating-an-upload-dv_{context}"] = Creating an upload data volume +[role="_abstract"] You can manually create a data volume with an `upload` data source to upload local disk images. .Prerequisites diff --git a/modules/virt-creating-and-exposing-mediated-devices.adoc b/modules/virt-creating-and-exposing-mediated-devices.adoc index 480a752e4b4c..92c798f05bd2 100644 --- a/modules/virt-creating-and-exposing-mediated-devices.adoc +++ b/modules/virt-creating-and-exposing-mediated-devices.adoc @@ -6,6 +6,7 @@ [id="virt-creating-exposing-mediated-devices_{context}"] = Creating and exposing mediated devices +[role="_abstract"] As an administrator, you can create mediated devices and expose them to the cluster by editing the `HyperConverged` custom resource (CR). Before you edit the CR, explore a worker node to find the configuration values that are specific to your hardware devices. 
.Prerequisites diff --git a/modules/virt-creating-custom-monitoring-label-for-vms.adoc b/modules/virt-creating-custom-monitoring-label-for-vms.adoc index 103128b54f23..ba26688ef6ed 100644 --- a/modules/virt-creating-custom-monitoring-label-for-vms.adoc +++ b/modules/virt-creating-custom-monitoring-label-for-vms.adoc @@ -6,7 +6,8 @@ [id="virt-creating-custom-monitoring-label-for-vms_{context}"] = Creating a custom monitoring label for virtual machines -To enable queries to multiple virtual machines from a single service, add a custom label in the virtual machine's YAML file. +[role="_abstract"] +To enable queries to multiple virtual machines from a single service, you can add a custom label in the virtual machine's YAML file. .Prerequisites diff --git a/modules/virt-creating-filesystem-fusion-access-san.adoc b/modules/virt-creating-filesystem-fusion-access-san.adoc index 9063db9cbbf6..77e0cbd8c692 100644 --- a/modules/virt-creating-filesystem-fusion-access-san.adoc +++ b/modules/virt-creating-filesystem-fusion-access-san.adoc @@ -6,6 +6,7 @@ [id="creating-filesystem-fusion-access-san_{context}"] = Creating a file system with {FusionSAN} +[role="_abstract"] You need to create a file system to represent your required storage. The file system is based on the storage available in the worker nodes you selected when creating the storage cluster. diff --git a/modules/virt-creating-fusionaccess-cr.adoc b/modules/virt-creating-fusionaccess-cr.adoc index f005d06d3690..c875d4c23d5e 100644 --- a/modules/virt-creating-fusionaccess-cr.adoc +++ b/modules/virt-creating-fusionaccess-cr.adoc @@ -6,6 +6,7 @@ [id="creating-fusionaccess-cr_{context}"] = Creating the FusionAccess CR +[role="_abstract"] After installing the {FusionSAN} Operator and creating a Kubernetes pull secret, you must create the `FusionAccess` custom resource (CR). Creating the `FusionAccess` CR triggers the installation of the correct version of IBM Storage Scale and detects worker nodes with shared LUNs. 
diff --git a/modules/virt-creating-headless-services.adoc b/modules/virt-creating-headless-services.adoc index c2f448e2c336..907366ad993e 100644 --- a/modules/virt-creating-headless-services.adoc +++ b/modules/virt-creating-headless-services.adoc @@ -6,6 +6,7 @@ [id="virt-creating-headless-services_{context}"] = Creating a headless service in a project by using the CLI +[role="_abstract"] To create a headless service in a namespace, add the `clusterIP: None` parameter to the service YAML definition. .Prerequisites @@ -42,4 +43,4 @@ spec: [source,terminal] ---- $ oc create -f headless_service.yaml ----- \ No newline at end of file +---- diff --git a/modules/virt-creating-hpp-basic-storage-pool.adoc b/modules/virt-creating-hpp-basic-storage-pool.adoc index cdcfe272e8ae..8f00bf805229 100644 --- a/modules/virt-creating-hpp-basic-storage-pool.adoc +++ b/modules/virt-creating-hpp-basic-storage-pool.adoc @@ -7,6 +7,7 @@ [id="virt-creating-hpp-basic-storage-pool_{context}"] = Creating a hostpath provisioner with a basic storage pool +[role="_abstract"] You configure a hostpath provisioner (HPP) with a basic storage pool by creating an HPP custom resource (CR) with a `storagePools` stanza. The storage pool specifies the name and path used by the CSI driver. [IMPORTANT] diff --git a/modules/virt-creating-infiniband-interface-on-nodes.adoc b/modules/virt-creating-infiniband-interface-on-nodes.adoc index 40d15ec64b32..e74a41822485 100644 --- a/modules/virt-creating-infiniband-interface-on-nodes.adoc +++ b/modules/virt-creating-infiniband-interface-on-nodes.adoc @@ -6,7 +6,10 @@ [id="virt-creating-infiniband-interface-on-nodes_{context}"] = Creating an IP over InfiniBand interface on nodes -On the {product-title} web console, you can install a Red{nbsp}Hat certified third-party Operator, such as the NVIDIA Network Operator, that supports IP over InfiniBand (IPoIB) mode. 
Typically, you would use the third-party Operator with other vendor infrastructure to manage resources in an {product-title} cluster. To create an IPoIB interface on nodes in your cluster, you must define an InfiniBand (IPoIB) interface in a `NodeNetworkConfigurationPolicy` (NNCP) manifest file. +[role="_abstract"] +On the {product-title} web console, you can install a Red{nbsp}Hat certified third-party Operator, such as the NVIDIA Network Operator, that supports IP over InfiniBand (IPoIB) mode. Typically, you would use the third-party Operator with other vendor infrastructure to manage resources in an {product-title} cluster. + +To create an IPoIB interface on nodes in your cluster, you must define an InfiniBand (IPoIB) interface in a `NodeNetworkConfigurationPolicy` (NNCP) manifest file. If you need to attach IPoIB to a bond interface, only the `active-backup` mode supports this configuration. diff --git a/modules/virt-creating-interface-on-nodes.adoc b/modules/virt-creating-interface-on-nodes.adoc index bdd6d6dcb087..8ff52887f7e8 100644 --- a/modules/virt-creating-interface-on-nodes.adoc +++ b/modules/virt-creating-interface-on-nodes.adoc @@ -6,7 +6,8 @@ [id="virt-creating-interface-on-nodes_{context}"] = Creating an interface on nodes -Create an interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` (NNCP) manifest to the cluster. The manifest details the requested configuration for the interface. +[role="_abstract"] +You can create an interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` (NNCP) manifest to the cluster. The manifest details the requested configuration for the interface. By default, the manifest applies to all nodes in the cluster. To add the interface to specific nodes, add the `spec: nodeSelector` parameter and the appropriate `<key>:<value>` for your node selector.
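The `spec: nodeSelector` behavior described above can be sketched in a minimal NNCP manifest. This is an illustrative example, not taken from the modules in this diff; the policy name, label, and interface names are placeholders:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy # hypothetical policy name
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: "" # the interface is created only on nodes with this label
  desiredState:
    interfaces:
      - name: br1 # hypothetical bridge name
        type: linux-bridge
        state: up
        bridge:
          port:
            - name: eth1 # hypothetical node NIC attached to the bridge
```

Omitting the `nodeSelector` stanza applies the policy to all nodes in the cluster.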
diff --git a/modules/virt-creating-layer2-nad-cli.adoc b/modules/virt-creating-layer2-nad-cli.adoc index 720063ba97ea..4638d66ce5c1 100644 --- a/modules/virt-creating-layer2-nad-cli.adoc +++ b/modules/virt-creating-layer2-nad-cli.adoc @@ -6,6 +6,7 @@ [id="virt-creating-layer2-nad-cli_{context}"] = Creating a NAD for layer 2 topology by using the CLI +[role="_abstract"] You can create a network attachment definition (NAD) which describes how to attach a pod to the layer 2 overlay network. .Prerequisites diff --git a/modules/virt-creating-linux-bridge-nad-cli.adoc b/modules/virt-creating-linux-bridge-nad-cli.adoc index 48a05abc9948..f878b2fe472e 100644 --- a/modules/virt-creating-linux-bridge-nad-cli.adoc +++ b/modules/virt-creating-linux-bridge-nad-cli.adoc @@ -6,6 +6,7 @@ [id="virt-creating-linux-bridge-nad-cli_{context}"] = Creating a Linux bridge NAD by using the CLI +[role="_abstract"] You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines (VMs) by using the command line. The NAD and the VM must be in the same namespace. diff --git a/modules/virt-creating-linux-bridge-nad-web.adoc b/modules/virt-creating-linux-bridge-nad-web.adoc index e68174cb7d28..f1596f5a0ab1 100644 --- a/modules/virt-creating-linux-bridge-nad-web.adoc +++ b/modules/virt-creating-linux-bridge-nad-web.adoc @@ -8,6 +8,7 @@ [id="virt-creating-linux-bridge-nad-web_{context}"] = Creating a Linux bridge NAD by using the web console +[role="_abstract"] You can create a network attachment definition (NAD) to provide layer-2 networking to pods and virtual machines by using the {product-title} web console. [WARNING] @@ -36,4 +37,4 @@ OSA interfaces on {ibm-z-name} do not support VLAN filtering and VLAN-tagged tra ==== + . Optional: Select *MAC Spoof Check* to enable MAC spoof filtering. This feature provides security against a MAC spoofing attack by allowing only a single MAC address to exit the pod. -. Click *Create*. 
\ No newline at end of file +. Click *Create*. diff --git a/modules/virt-creating-linux-bridge-nncp.adoc b/modules/virt-creating-linux-bridge-nncp.adoc index 7f4645a97c92..7d89e5947bcf 100644 --- a/modules/virt-creating-linux-bridge-nncp.adoc +++ b/modules/virt-creating-linux-bridge-nncp.adoc @@ -7,6 +7,7 @@ [id="virt-creating-linux-bridge-nncp_{context}"] = Creating a Linux bridge NNCP +[role="_abstract"] You can create a `NodeNetworkConfigurationPolicy` (NNCP) manifest for a Linux bridge network. .Prerequisites @@ -16,6 +17,7 @@ You can create a `NodeNetworkConfigurationPolicy` manifest for a Linux br * Create the `NodeNetworkConfigurationPolicy` manifest. This example includes sample values that you must replace with your own information. + +-- [source,yaml] ---- apiVersion: nmstate.io/v1 @@ -46,7 +48,8 @@ spec: <6> Disables IPv4 in this example. <7> Disables STP in this example. <8> The node NIC to which the bridge is attached. - +-- ++ [NOTE] ==== To create the NNCP manifest for a Linux bridge using OSA with {ibm-z-name}, you must disable VLAN filtering by setting the `rx-vlan-filter` to `false` in the `NodeNetworkConfigurationPolicy` manifest. @@ -57,4 +60,4 @@ Alternatively, if you have SSH access to the node, you can disable VLAN filterin ---- $ sudo ethtool -K <interface_name> rx-vlan-filter off ---- -==== \ No newline at end of file +==== diff --git a/modules/virt-creating-local-block-pv.adoc b/modules/virt-creating-local-block-pv.adoc index 3329168a989c..bb8352b13e6e 100644 --- a/modules/virt-creating-local-block-pv.adoc +++ b/modules/virt-creating-local-block-pv.adoc @@ -8,6 +8,7 @@ [id="virt-creating-local-block-pv_{context}"] = Creating a local block persistent volume +[role="_abstract"] If you intend to import a virtual machine image into block storage with a data volume, you must have an available local block persistent volume.
Create a local block persistent volume (PV) on a node by populating a file and diff --git a/modules/virt-creating-localnet-nad-cli.adoc b/modules/virt-creating-localnet-nad-cli.adoc index df0bf0afd9c2..91ae69fc7639 100644 --- a/modules/virt-creating-localnet-nad-cli.adoc +++ b/modules/virt-creating-localnet-nad-cli.adoc @@ -6,6 +6,7 @@ [id="virt-creating-localnet-nad-cli_{context}"] = Creating a NAD for localnet topology using the CLI +[role="_abstract"] You can create a network attachment definition (NAD) which describes how to attach a pod to the underlying physical network. .Prerequisites diff --git a/modules/virt-creating-long-lived-account-and-token.adoc b/modules/virt-creating-long-lived-account-and-token.adoc index 104790d1099b..17aa5636b526 100644 --- a/modules/virt-creating-long-lived-account-and-token.adoc +++ b/modules/virt-creating-long-lived-account-and-token.adoc @@ -4,7 +4,7 @@ :_mod-docs-content-type: PROCEDURE [id="virt-creating-long-lived-account-and-token_{context}"] -== Creating the long-lived service account and token to use with MTV providers += Creating the long-lived service account and token to use with MTV providers [role="_abstract"] When you register an {VirtProductName} provider in the {mtv-first} web console, you must supply credentials that allow MTV to interact with the cluster. Creating a long-lived service account and cluster role binding gives MTV persistent permissions to read and create virtual machine resources during migration. 
diff --git a/modules/virt-creating-nad-l2-overlay-console.adoc b/modules/virt-creating-nad-l2-overlay-console.adoc index 21af842cc640..3691de12c9e9 100644 --- a/modules/virt-creating-nad-l2-overlay-console.adoc +++ b/modules/virt-creating-nad-l2-overlay-console.adoc @@ -6,6 +6,7 @@ [id="virt-creating-nad-l2-overlay-console_{context}"] = Creating a NAD for layer 2 topology by using the web console +[role="_abstract"] You can create a network attachment definition (NAD) that describes how to attach a pod to the layer 2 overlay network. .Prerequisites @@ -21,4 +22,4 @@ You can create a network attachment definition (NAD) that describes how to attac . Select *OVN Kubernetes L2 overlay network* from the *Network Type* list. -. Click *Create*. \ No newline at end of file +. Click *Create*. diff --git a/modules/virt-creating-nad-localnet-console.adoc b/modules/virt-creating-nad-localnet-console.adoc index 54d35459bef9..5b56a831d8b3 100644 --- a/modules/virt-creating-nad-localnet-console.adoc +++ b/modules/virt-creating-nad-localnet-console.adoc @@ -6,6 +6,7 @@ [id="virt-creating-nad-localnet-console_{context}"] = Creating a NAD for localnet topology using the web console +[role="_abstract"] You can create a network attachment definition (NAD) to connect workloads to a physical network by using the {product-title} web console. .Prerequisites @@ -28,4 +29,4 @@ You can create a network attachment definition (NAD) to connect workloads to a p . Optional: Encapsulate the traffic in a VLAN. The default value is none. -. Click *Create*. \ No newline at end of file +. Click *Create*. 
diff --git a/modules/virt-creating-new-vm-from-cloned-pvc-using-datavolumetemplate.adoc b/modules/virt-creating-new-vm-from-cloned-pvc-using-datavolumetemplate.adoc index 0de6292020c1..f600529e00d5 100644 --- a/modules/virt-creating-new-vm-from-cloned-pvc-using-datavolumetemplate.adoc +++ b/modules/virt-creating-new-vm-from-cloned-pvc-using-datavolumetemplate.adoc @@ -6,8 +6,11 @@ [id="virt-creating-new-vm-from-cloned-pvc-using-datavolumetemplate_{context}"] = Creating a new virtual machine from a cloned persistent volume claim by using a data volume template +[role="_abstract"] You can create a virtual machine that clones the persistent volume claim (PVC) of -an existing virtual machine into a data volume. Reference a +an existing virtual machine into a data volume. + +Reference a `dataVolumeTemplate` in the virtual machine manifest and the `source` PVC is cloned to a data volume, which is then automatically used for the creation of the virtual machine. diff --git a/modules/virt-creating-primary-cluster-udn-web.adoc b/modules/virt-creating-primary-cluster-udn-web.adoc index 2709a0cf5da3..748594134676 100644 --- a/modules/virt-creating-primary-cluster-udn-web.adoc +++ b/modules/virt-creating-primary-cluster-udn-web.adoc @@ -6,6 +6,7 @@ [id="virt-creating-primary-cluster-udn-web_{context}"] = Creating a primary cluster-scoped user-defined network by using the web console +[role="_abstract"] You can connect multiple namespaces to the same primary user-defined network (UDN) by creating a `ClusterUserDefinedNetwork` custom resource in the {product-title} web console. .Prerequisites @@ -22,4 +23,4 @@ You can connect multiple namespaces to the same primary user-defined network (UD . In the *Project(s) Match Labels* field, add the appropriate labels to select namespaces that the cluster UDN applies to. -. Click *Create*. 
The cluster-scoped UDN serves as the default primary network for pods and virtual machines located in namespaces that contain the labels that you specified in step 5. \ No newline at end of file +. Click *Create*. The cluster-scoped UDN serves as the default primary network for pods and virtual machines located in namespaces that contain the labels that you specified in step 5. diff --git a/modules/virt-creating-primary-udn-web.adoc b/modules/virt-creating-primary-udn-web.adoc index 195ca9e2be00..dd08d93d481d 100644 --- a/modules/virt-creating-primary-udn-web.adoc +++ b/modules/virt-creating-primary-udn-web.adoc @@ -6,6 +6,7 @@ [id="virt-creating-primary-udn-web_{context}"] = Creating a primary namespace-scoped user-defined network by using the web console +[role="_abstract"] You can create an isolated primary network in your project namespace by creating a `UserDefinedNetwork` custom resource in the {product-title} web console. .Prerequisites @@ -21,4 +22,4 @@ You can create an isolated primary network in your project namespace by creating . Specify a value in the *Subnet* field. -. Click *Create*. The user-defined network serves as the default primary network for pods and virtual machines that you create in this namespace. \ No newline at end of file +. Click *Create*. The user-defined network serves as the default primary network for pods and virtual machines that you create in this namespace. 
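The web console forms in the two modules above produce the same resources as a YAML manifest. As a hedged sketch (the name, namespace, and subnet are placeholders, and the field layout assumes the OVN-Kubernetes `UserDefinedNetwork` v1 API), a primary namespace-scoped UDN might look like:

```yaml
apiVersion: k8s.ovn.org/v1
kind: UserDefinedNetwork
metadata:
  name: primary-udn # hypothetical name
  namespace: my-project # hypothetical namespace
spec:
  topology: Layer2 # layer 2 topology supports VM live migration
  layer2:
    role: Primary
    subnets:
      - "10.100.0.0/24" # example subnet
    ipam:
      lifecycle: Persistent # persistent IP allocation, required for live migration
```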
diff --git a/modules/virt-creating-pull-secret-fusion-san.adoc b/modules/virt-creating-pull-secret-fusion-san.adoc index dda9bb82cc20..33eca8c709ec 100644 --- a/modules/virt-creating-pull-secret-fusion-san.adoc +++ b/modules/virt-creating-pull-secret-fusion-san.adoc @@ -6,6 +6,7 @@ [id="creating-pull-secret-fusion-san_{context}"] = Creating a Kubernetes pull secret +[role="_abstract"] After installing the {FusionSAN} Operator, you must create a Kubernetes secret object to hold the IBM entitlement key for pulling the required container images from the IBM container registry. .Prerequisites @@ -39,4 +40,4 @@ $ oc create secret -n ibm-fusion-access generic fusion-pullsecret \ . In the {product-title} web console, navigate to *Workloads* -> *Secrets*. -. Find the `fusion-pullsecret` in the list. \ No newline at end of file +. Find the `fusion-pullsecret` in the list. diff --git a/modules/virt-creating-rbac-cloning-dvs.adoc b/modules/virt-creating-rbac-cloning-dvs.adoc index 81ff2b44a49c..d5746b98d53d 100644 --- a/modules/virt-creating-rbac-cloning-dvs.adoc +++ b/modules/virt-creating-rbac-cloning-dvs.adoc @@ -6,7 +6,8 @@ [id="virt-creating-rbac-cloning-dvs_{context}"] = Creating RBAC resources for cloning data volumes -Create a new cluster role that enables permissions for all actions for the `datavolumes` resource. +[role="_abstract"] +You can create a new cluster role that enables permissions for all actions for the `datavolumes` resource. 
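The cluster role described above might be sketched as follows. This is an assumption-laden example: it assumes the CDI `cdi.kubevirt.io` API group and includes the `datavolumes/source` subresource that cloning typically requires; the role name is a placeholder:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: datavolume-cloner # hypothetical role name
rules:
  - apiGroups: ["cdi.kubevirt.io"]
    resources: ["datavolumes", "datavolumes/source"]
    verbs: ["*"] # all actions on the datavolumes resource
```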
.Prerequisites diff --git a/modules/virt-creating-secondary-localnet-udn.adoc b/modules/virt-creating-secondary-localnet-udn.adoc index 38ea69594abd..158133dd2ebe 100644 --- a/modules/virt-creating-secondary-localnet-udn.adoc +++ b/modules/virt-creating-secondary-localnet-udn.adoc @@ -6,6 +6,7 @@ [id="virt-creating-secondary-localnet-udn_{context}"] = Creating a user-defined-network for localnet topology by using the CLI +[role="_abstract"] You can create a secondary cluster-scoped user-defined-network (CUDN) for the localnet network topology by using the CLI. .Prerequisites @@ -14,9 +15,10 @@ You can create a secondary cluster-scoped user-defined-network (CUDN) for the lo * You installed the Kubernetes NMState Operator. .Procedure -. Create a `NodeNetworkConfigurationPolicy` object to map the OVN-Kubernetes secondary network to an Open vSwitch (OVS) bridge: +. Create a `NodeNetworkConfigurationPolicy` object to map the OVN-Kubernetes secondary network to an Open vSwitch (OVS) bridge. ++ +Example `NodeNetworkConfigurationPolicy` manifest: + -.Example `NodeNetworkConfigurationPolicy` manifest [source,yaml] ---- apiVersion: nmstate.io/v1 @@ -55,9 +57,10 @@ where: :: Specifies the name of your `NodeNetworkConfigurationPolicy` manifest YAML file. -. Create a `ClusterUserDefinedNetwork` object to create a localnet secondary network: +. Create a `ClusterUserDefinedNetwork` object to create a localnet secondary network. 
++ +Example `ClusterUserDefinedNetwork` manifest: + -.Example `ClusterUserDefinedNetwork` manifest [source,yaml] ---- apiVersion: k8s.ovn.org/v1 diff --git a/modules/virt-creating-secondary-udn-namespace.adoc b/modules/virt-creating-secondary-udn-namespace.adoc index 1b6dc7d36834..b1afe6bb99fb 100644 --- a/modules/virt-creating-secondary-udn-namespace.adoc +++ b/modules/virt-creating-secondary-udn-namespace.adoc @@ -6,6 +6,7 @@ [id="virt-creating-secondary-udn-namespace_{context}"] = Creating a namespace for secondary user-defined networks by using the CLI +[role="_abstract"] You can create a namespace to be used with an existing secondary cluster-scoped user-defined network (CUDN) by using the CLI. .Prerequisites @@ -16,7 +17,6 @@ You can create a namespace to be used with an existing secondary cluster-scoped .Procedure . Create a `Namespace` object similar to the following example: + -.Example `Namespace` manifest [source,yaml] ---- apiVersion: v1 diff --git a/modules/virt-creating-service-cli.adoc b/modules/virt-creating-service-cli.adoc index a6de70841403..88023191440c 100644 --- a/modules/virt-creating-service-cli.adoc +++ b/modules/virt-creating-service-cli.adoc @@ -7,6 +7,7 @@ [id="virt-creating-service-cli_{context}"] = Creating a service by using the CLI +[role="_abstract"] You can create a service and associate it with a virtual machine (VM) by using the command line. .Prerequisites diff --git a/modules/virt-creating-service-virtctl.adoc b/modules/virt-creating-service-virtctl.adoc index 7482105d9269..a6279188cde1 100644 --- a/modules/virt-creating-service-virtctl.adoc +++ b/modules/virt-creating-service-virtctl.adoc @@ -6,6 +6,7 @@ [id="virt-creating-service-virtctl_{context}"] = Creating a service by using virtctl +[role="_abstract"] You can create a service for a virtual machine (VM) by using the `virtctl` command-line tool. 
.Prerequisites @@ -24,7 +25,7 @@ $ virtctl expose vm <vm_name> --name <service_name> --type <service_type> --port <port> ---- <1> Specify the `ClusterIP`, `NodePort`, or `LoadBalancer` service type. + -.Example +Example: + [source,terminal] ---- @@ -38,4 +39,4 @@ $ virtctl expose vm example-vm --name example-service --type NodePort --port 22 [source,terminal] ---- $ oc get service ----- \ No newline at end of file +---- diff --git a/modules/virt-creating-service-web.adoc b/modules/virt-creating-service-web.adoc index 18584fc0ba57..c61fa10ec7e6 100644 --- a/modules/virt-creating-service-web.adoc +++ b/modules/virt-creating-service-web.adoc @@ -6,6 +6,7 @@ [id="virt-creating-service-web_{context}"] = Creating a service by using the web console +[role="_abstract"] You can create a node port or load balancer service for a virtual machine (VM) by using the {product-title} web console. .Prerequisites @@ -21,4 +22,4 @@ You can create a node port or load balancer service for a virtual machine (VM) b .Verification -* Check the *Services* pane on the *Details* tab to view the new service. \ No newline at end of file +* Check the *Services* pane on the *Details* tab to view the new service. diff --git a/modules/virt-creating-servicemonitor-resource-for-node-exporter.adoc b/modules/virt-creating-servicemonitor-resource-for-node-exporter.adoc index 687ef450d8c9..29dc8af678bb 100644 --- a/modules/virt-creating-servicemonitor-resource-for-node-exporter.adoc +++ b/modules/virt-creating-servicemonitor-resource-for-node-exporter.adoc @@ -6,6 +6,7 @@ [id="virt-creating-servicemonitor-resource-for-node-exporter_{context}"] = Creating a ServiceMonitor resource for the node exporter service +[role="_abstract"] You can use a Prometheus client library and scrape metrics from the `/metrics` endpoint to access and view the metrics exposed by the node-exporter service. Use a `ServiceMonitor` custom resource definition (CRD) to monitor the node exporter service.
.Prerequisites diff --git a/modules/virt-creating-storage-class-csi-driver.adoc b/modules/virt-creating-storage-class-csi-driver.adoc index 01f0560db5dd..1ec31bf90a91 100644 --- a/modules/virt-creating-storage-class-csi-driver.adoc +++ b/modules/virt-creating-storage-class-csi-driver.adoc @@ -7,6 +7,7 @@ [id="virt-creating-storage-class-csi-driver_{context}"] = Creating a storage class for the CSI driver with the storagePools stanza +[role="_abstract"] To use the hostpath provisioner (HPP) you must create an associated storage class for the Container Storage Interface (CSI) driver. When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a `StorageClass` object's parameters after you create it. diff --git a/modules/virt-creating-storage-cluster-fusion-access-san.adoc b/modules/virt-creating-storage-cluster-fusion-access-san.adoc index 5e181c2d1831..e7842b4e8550 100644 --- a/modules/virt-creating-storage-cluster-fusion-access-san.adoc +++ b/modules/virt-creating-storage-cluster-fusion-access-san.adoc @@ -6,6 +6,7 @@ [id="creating-storage-cluster-fusion-access-san_{context}"] = Creating a storage cluster with {FusionSAN} +[role="_abstract"] Once you have installed the {FusionSAN} Operator, you can create a storage cluster with shared storage nodes. The wizard for creating the storage cluster in the {product-title} web console provides easy-to-follow steps and lists the relevant worker nodes with shared disks. @@ -33,4 +34,4 @@ You can only select worker nodes with a minimum of 20 GB of RAM from the list. . Click *Create storage cluster*. + -The page reloads, opening the {FusionSAN} page for the new storage cluster. \ No newline at end of file +The page reloads, opening the {FusionSAN} page for the new storage cluster. 
diff --git a/modules/virt-creating-storage-pool-pvc-template.adoc b/modules/virt-creating-storage-pool-pvc-template.adoc index eac5533075c0..700820e96a32 100644 --- a/modules/virt-creating-storage-pool-pvc-template.adoc +++ b/modules/virt-creating-storage-pool-pvc-template.adoc @@ -6,6 +6,7 @@ [id="virt-creating-storage-pool-pvc-template_{context}"] = Creating a storage pool with a PVC template +[role="_abstract"] You can create a storage pool for multiple hostpath provisioner (HPP) volumes by specifying a PVC template in the HPP custom resource (CR). [IMPORTANT] diff --git a/modules/virt-creating-template.adoc b/modules/virt-creating-template.adoc index e4c4b7cb3225..ab7b8ffe4069 100644 --- a/modules/virt-creating-template.adoc +++ b/modules/virt-creating-template.adoc @@ -6,7 +6,8 @@ [id="virt-creating-template_{context}"] = Creating a custom VM template in the web console -You create a virtual machine template by editing a YAML file example in the {product-title} web console. +[role="_abstract"] +You can create a virtual machine template by editing a YAML file example in the {product-title} web console. .Procedure diff --git a/modules/virt-creating-udn-namespace-cli.adoc b/modules/virt-creating-udn-namespace-cli.adoc index 800cd4303869..b74396f11224 100644 --- a/modules/virt-creating-udn-namespace-cli.adoc +++ b/modules/virt-creating-udn-namespace-cli.adoc @@ -6,6 +6,7 @@ [id="virt-creating-udn-namespace-cli_{context}"] = Creating a namespace for user-defined networks by using the CLI +[role="_abstract"] You can create a namespace to be used with primary user-defined networks (UDNs) by using the {oc-first}. 
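As a hedged sketch, a namespace prepared for a primary UDN carries the `k8s.ovn.org/primary-user-defined-network` label mentioned in the prerequisites earlier in this diff (the namespace name is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: udn-project # hypothetical name
  labels:
    k8s.ovn.org/primary-user-defined-network: "" # marks the namespace for a primary UDN
```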
.Prerequisites diff --git a/modules/virt-creating-udn-namespace-web.adoc b/modules/virt-creating-udn-namespace-web.adoc index 604d8be87ee9..2e5ce0a5f88a 100644 --- a/modules/virt-creating-udn-namespace-web.adoc +++ b/modules/virt-creating-udn-namespace-web.adoc @@ -6,6 +6,7 @@ [id="virt-creating-udn-namespace-web_{context}"] = Creating a namespace for user-defined networks by using the web console +[role="_abstract"] You can create a namespace to be used with primary user-defined networks (UDNs) by using the {product-title} web console. .Prerequisites @@ -25,4 +26,4 @@ You can create a namespace to be used with primary user-defined networks (UDNs) . Optional: Specify a default network policy. -. Click *Create* to create the namespace. \ No newline at end of file +. Click *Create* to create the namespace. diff --git a/modules/virt-creating-virtualmachineexport.adoc b/modules/virt-creating-virtualmachineexport.adoc index d3f0dea1b153..e14a655796f0 100644 --- a/modules/virt-creating-virtualmachineexport.adoc +++ b/modules/virt-creating-virtualmachineexport.adoc @@ -6,9 +6,12 @@ [id="virt-creating-virtualmachineexport_{context}"] = Creating a VirtualMachineExport custom resource -You can create a `VirtualMachineExport` custom resource (CR) to export the following objects: +[role="_abstract"] +You can create a `VirtualMachineExport` custom resource (CR) to export persistent volume claims (PVCs) from a `VirtualMachine`, `VirtualMachineSnapshot`, or `PersistentVolumeClaim` CR. -* Virtual machine (VM): Exports the persistent volume claims (PVCs) of a specified VM. +You can export the following objects: + +* VM: Exports the persistent volume claims of a specified VM. * VM snapshot: Exports PVCs contained in a `VirtualMachineSnapshot` CR. * PVC: Exports a PVC. If the PVC is used by another pod, such as the `virt-launcher` pod, the export remains in a `Pending` state until the PVC is no longer in use. 
@@ -28,9 +31,10 @@ The export server supports the following file formats: .Procedure -. Create a `VirtualMachineExport` manifest to export a volume from a `VirtualMachine`, `VirtualMachineSnapshot`, or `PersistentVolumeClaim` CR according to the following example and save it as `example-export.yaml`: +. Create a `VirtualMachineExport` manifest to export a volume from a `VirtualMachine`, `VirtualMachineSnapshot`, or `PersistentVolumeClaim` CR according to the following example and save it as `example-export.yaml`. ++ +`VirtualMachineExport` example: + -.`VirtualMachineExport` example [source,yaml] ---- apiVersion: export.kubevirt.io/v1beta1 @@ -68,7 +72,8 @@ $ oc get vmexport example-export -o yaml + The internal and external links for the exported volumes are displayed in the `status` stanza: + -.Output example +Output example: ++ [source,yaml] ---- apiVersion: export.kubevirt.io/v1beta1 diff --git a/modules/virt-creating-vm-container-disk-cli.adoc b/modules/virt-creating-vm-container-disk-cli.adoc index db84421dc2d5..93d2c3f4d9a2 100644 --- a/modules/virt-creating-vm-container-disk-cli.adoc +++ b/modules/virt-creating-vm-container-disk-cli.adoc @@ -6,6 +6,7 @@ [id="virt-creating-vm-import-cli_{context}"] = Creating a VM from a container disk by using the CLI +[role="_abstract"] You can create a virtual machine (VM) from a container disk by using the command line. .Prerequisites @@ -71,9 +72,8 @@ $ oc create -f <vm_manifest_file>.yaml $ oc get vm ---- + -If the provisioning is successful, the VM status is `Running`: +If the provisioning is successful, the VM status is `Running`. Example output: + -.Example output [source,terminal] ---- NAME AGE STATUS READY @@ -89,7 +89,6 @@ $ virtctl console <vm_name> + If the VM is running and the serial console is accessible, the output looks as follows: + -.Example output [source,terminal] ---- Successfully connected to vm-rhel-9 console.
The escape sequence is ^] diff --git a/modules/virt-creating-vm-custom-image-web.adoc b/modules/virt-creating-vm-custom-image-web.adoc index f1524394261f..e1a88025012c 100644 --- a/modules/virt-creating-vm-custom-image-web.adoc +++ b/modules/virt-creating-vm-custom-image-web.adoc @@ -31,9 +31,11 @@ endif::[] = Creating a VM {title-frag} by using the web console ifdef::url,container-disks[] +[role="_abstract"] You can create a virtual machine (VM) by importing {a-object} from a {data-source} by using the {product-title} web console. endif::[] ifdef::clone[] +[role="_abstract"] You can create a virtual machine (VM) by cloning a persistent volume claim (PVC) by using the {product-title} web console. endif::[] diff --git a/modules/virt-creating-vm-from-snapshot-web.adoc b/modules/virt-creating-vm-from-snapshot-web.adoc index 12cd3b868b7c..7ff4b7eda778 100644 --- a/modules/virt-creating-vm-from-snapshot-web.adoc +++ b/modules/virt-creating-vm-from-snapshot-web.adoc @@ -6,6 +6,7 @@ [id="virt-creating-vm-from-snapshot-web_{context}"] = Creating a VM from an existing snapshot by using the web console +[role="_abstract"] You can create a new VM by copying an existing snapshot. .Procedure @@ -17,4 +18,4 @@ You can create a new VM by copying an existing snapshot. . Select *Create VirtualMachine*. . Enter the name of the virtual machine. . (Optional) Select the *Start this VirtualMachine after creation* checkbox to start the new virtual machine. -. Click *Create*. \ No newline at end of file +. Click *Create*. 
diff --git a/modules/virt-creating-vm-from-template.adoc b/modules/virt-creating-vm-from-template.adoc index 74a6e6d52153..7fb911e0110a 100644 --- a/modules/virt-creating-vm-from-template.adoc +++ b/modules/virt-creating-vm-from-template.adoc @@ -6,6 +6,7 @@ [id="virt-creating-vm-from-template_{context}"] = Creating a VM from a template +[role="_abstract"] You can create a virtual machine (VM) from a template with an available boot source by using the {product-title} web console. You can customize template or VM parameters, such as data sources, Cloud-init, or SSH keys, before you start the VM. You can choose between two views in the web console to create the VM: diff --git a/modules/virt-creating-vm-instancetype.adoc b/modules/virt-creating-vm-instancetype.adoc index 91398548df25..a872b7bd480d 100644 --- a/modules/virt-creating-vm-instancetype.adoc +++ b/modules/virt-creating-vm-instancetype.adoc @@ -21,15 +21,18 @@ endif::[] = Creating a VM from an instance type by using the web console ifdef::virt-create-vms[] +[role="_abstract"] You can create a virtual machine (VM) from an instance type by using the {product-title} web console. You can also use the web console to create a VM by copying an existing snapshot or to clone a VM. You can create a VM from a list of available bootable volumes. You can add Linux- or Windows-based volumes to the list. endif::[] ifdef::static-key[] +[role="_abstract"] You can add a statically managed SSH key when you create a virtual machine (VM) from an instance type by using the {product-title} web console. The key is added to the VM as a cloud-init data source at first boot. This method does not affect cloud-init user data. endif::[] ifdef::dynamic-key[] +[role="_abstract"] You can enable dynamic SSH key injection when you create a virtual machine (VM) from an instance type by using the {product-title} web console. Then, you can add or revoke the key at runtime. [NOTE] @@ -118,6 +121,7 @@ endif::[] . 
Optional: Click *View YAML & CLI* to view the YAML file. Click *CLI* to view the CLI commands. You can also download or copy either the YAML file contents or the CLI commands. . Click *Create VirtualMachine*. +.Result After the VM is created, you can monitor the status on the *VirtualMachine details* page. @@ -129,4 +133,4 @@ ifeval::["{context}" == "static-key"] endif::[] ifeval::["{context}" == "dynamic-key"] :!dynamic-key: -endif::[] \ No newline at end of file +endif::[] diff --git a/modules/virt-creating-vm-snapshot-cli.adoc b/modules/virt-creating-vm-snapshot-cli.adoc index 818704909ee0..d38908e35179 100644 --- a/modules/virt-creating-vm-snapshot-cli.adoc +++ b/modules/virt-creating-vm-snapshot-cli.adoc @@ -6,6 +6,7 @@ [id="virt-creating-vm-snapshot-cli_{context}"] = Creating a snapshot by using the CLI +[role="_abstract"] You can create a virtual machine (VM) snapshot for an offline or online VM by creating a `VirtualMachineSnapshot` object. .Prerequisites @@ -17,7 +18,8 @@ You can create a virtual machine (VM) snapshot for an offline or online VM by cr $ oc get kubevirt kubevirt-hyperconverged -n {CNVNamespace} -o yaml ---- + -.Truncated output +Truncated output: ++ [source,yaml] ---- spec: @@ -93,7 +95,8 @@ If you do not specify a unit of time such as `m` or `s`, the default is seconds $ oc describe vmsnapshot ---- + -.Example output +Example output: ++ [source,yaml] ---- apiVersion: snapshot.kubevirt.io/v1beta1 @@ -146,4 +149,4 @@ status: <5> Specifies additional information about the snapshot, such as whether it is an online snapshot, or whether it was created with QEMU guest agent running. <6> Lists the storage volumes that are part of the snapshot, as well as their parameters. -. Check the `includedVolumes` section in the snapshot description to verify that the expected PVCs are included in the snapshot. \ No newline at end of file +. 
Check the `includedVolumes` section in the snapshot description to verify that the expected PVCs are included in the snapshot. diff --git a/modules/virt-creating-vm-snapshot-web.adoc b/modules/virt-creating-vm-snapshot-web.adoc index 134f00f18a2c..0bba61a9a6b9 100644 --- a/modules/virt-creating-vm-snapshot-web.adoc +++ b/modules/virt-creating-vm-snapshot-web.adoc @@ -6,6 +6,7 @@ [id="virt-creating-vm-snapshot-web_{context}"] = Creating a snapshot by using the web console +[role="_abstract"] You can create a snapshot of a virtual machine (VM) by using the {product-title} web console. .Prerequisites diff --git a/modules/virt-creating-vm-uploaded-image-web.adoc b/modules/virt-creating-vm-uploaded-image-web.adoc index 86f2b3011d04..f56e145f0e56 100644 --- a/modules/virt-creating-vm-uploaded-image-web.adoc +++ b/modules/virt-creating-vm-uploaded-image-web.adoc @@ -6,6 +6,7 @@ [id="virt-creating-vm-uploaded-image-web_{context}"] = Creating a VM from an uploaded image by using the web console +[role="_abstract"] You can create a virtual machine (VM) from an uploaded operating system image by using the {product-title} web console. .Prerequisites @@ -20,4 +21,4 @@ You can create a virtual machine (VM) from an uploaded operating system image by . On the *Customize template parameters* page, expand *Storage* and select *Upload (Upload a new file to a PVC)* from the *Disk source* list. . Browse to the image on your local machine and set the disk size. . Click *Customize VirtualMachine*. -. Click *Create VirtualMachine*. \ No newline at end of file +. Click *Create VirtualMachine*. 
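The snapshot CLI module above creates a `VirtualMachineSnapshot` object, but the hunks omit the full manifest. A minimal sketch, assuming placeholder names and the `snapshot.kubevirt.io/v1beta1` API version that appears in the `oc describe vmsnapshot` output, might be:

```yaml
apiVersion: snapshot.kubevirt.io/v1beta1
kind: VirtualMachineSnapshot
metadata:
  name: my-vmsnapshot      # placeholder snapshot name
spec:
  source:
    apiGroup: kubevirt.io
    kind: VirtualMachine
    name: my-vm            # placeholder: the VM to snapshot
```

Applying a manifest like this with `oc create -f` requests the snapshot for an offline or online VM, as the module describes.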
diff --git a/modules/virt-creating-vm-web-page-cli.adoc b/modules/virt-creating-vm-web-page-cli.adoc index 8e5db79514c6..161b5d1ac6ab 100644 --- a/modules/virt-creating-vm-web-page-cli.adoc +++ b/modules/virt-creating-vm-web-page-cli.adoc @@ -6,6 +6,7 @@ [id="virt-creating-vm-import-cli_{context}"] = Creating a VM from an image on a web page by using the CLI +[role="_abstract"] You can create a virtual machine (VM) from an image on a web page by using the command line. When the VM is created, the data volume with the image is imported into persistent storage. @@ -95,9 +96,10 @@ $ oc get pods $ oc get dv ---- + -If the provisioning is successful, the data volume phase is `Succeeded`: +If the provisioning is successful, the data volume phase is `Succeeded`. ++ +Example output: + -.Example output [source,terminal] ---- NAME PHASE PROGRESS RESTARTS AGE @@ -113,7 +115,6 @@ $ virtctl console + If the VM is running and the serial console is accessible, the output looks as follows: + -.Example output [source,terminal] ---- Successfully connected to vm-rhel-9 console. The escape sequence is ^] diff --git a/modules/virt-creating-windows-vm.adoc b/modules/virt-creating-windows-vm.adoc index 245d3bfe692d..1e29bb40a054 100644 --- a/modules/virt-creating-windows-vm.adoc +++ b/modules/virt-creating-windows-vm.adoc @@ -6,6 +6,7 @@ [id="virt-creating-windows-vm_{context}"] = Creating a Windows VM +[role="_abstract"] You can create a Windows virtual machine (VM) by uploading a Windows image to a persistent volume claim (PVC) and then cloning the PVC when you create a VM by using the {product-title} web console. 
.Prerequisites diff --git a/modules/virt-customizing-storage-profile-default-cloning-strategy.adoc b/modules/virt-customizing-storage-profile-default-cloning-strategy.adoc index 03dc5964e97a..bb0ca56d36c5 100644 --- a/modules/virt-customizing-storage-profile-default-cloning-strategy.adoc +++ b/modules/virt-customizing-storage-profile-default-cloning-strategy.adoc @@ -6,7 +6,8 @@ [id="virt-customizing-storage-profile-default-cloning-strategy_{context}"] = Setting a default cloning strategy by using a storage profile -You can use storage profiles to set a default cloning method for a storage class by creating a cloning strategy. Setting cloning strategies can be helpful, for example, if your storage vendor supports only certain cloning methods. It also allows you to select a method that limits resource usage or maximizes performance. +[role="_abstract"] +You can use storage profiles to set a default cloning method for a storage class by creating a cloning strategy. This can be helpful, for example, if your storage vendor supports only certain cloning methods. You can also select a method that limits resource usage or maximizes performance. Cloning strategies are specified by setting the `cloneStrategy` attribute in a storage profile to one of the following values: @@ -19,7 +20,8 @@ Cloning strategies are specified by setting the `cloneStrategy` attribute in a s You can set clone strategies using the CLI without modifying the default `claimPropertySets` in your YAML `spec` section. 
==== -.Example storage profile +Example storage profile: + [source,yaml] ---- apiVersion: cdi.kubevirt.io/v1beta1 diff --git a/modules/virt-default-cluster-roles.adoc b/modules/virt-default-cluster-roles.adoc index 3729450b1ec1..cb82c2e3ea1b 100644 --- a/modules/virt-default-cluster-roles.adoc +++ b/modules/virt-default-cluster-roles.adoc @@ -6,6 +6,7 @@ [id="default-cluster-roles-for-virt_{context}"] = Default cluster roles for {VirtProductName} +[role="_abstract"] By using cluster role aggregation, {VirtProductName} extends the default {product-title} cluster roles to include permissions for accessing virtualization objects. Roles unique to {VirtProductName} are not aggregated with {product-title} roles. .{VirtProductName} cluster roles @@ -30,4 +31,4 @@ By using cluster role aggregation, {VirtProductName} extends the default {produc .^| `N/A` .^|`kubevirt.io:migrate` | A user that can create, delete, and update VM live migration requests, which are represented by namespaced `VirtualMachineInstanceMigration` (VMIM) objects. This role is specific to {VirtProductName}. -|=== \ No newline at end of file +|=== diff --git a/modules/virt-define-guest-agent-ping-probe.adoc b/modules/virt-define-guest-agent-ping-probe.adoc index 66d89c769599..1d2a29644b62 100644 --- a/modules/virt-define-guest-agent-ping-probe.adoc +++ b/modules/virt-define-guest-agent-ping-probe.adoc @@ -7,7 +7,8 @@ = Defining a guest agent ping probe -Define a guest agent ping probe by setting the `spec.readinessProbe.guestAgentPing` field of the virtual machine (VM) configuration. +[role="_abstract"] +You can define a guest agent ping probe by setting the `spec.readinessProbe.guestAgentPing` field of the virtual machine (VM) configuration. .Prerequisites @@ -18,8 +19,6 @@ Define a guest agent ping probe by setting the `spec.readinessProbe.guestAgentPi . Include details of the guest agent ping probe in the VM configuration file. 
For example: + - -.Sample guest agent ping probe [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-define-http-liveness-probe.adoc b/modules/virt-define-http-liveness-probe.adoc index 9ece1d023dff..e3742f80361f 100644 --- a/modules/virt-define-http-liveness-probe.adoc +++ b/modules/virt-define-http-liveness-probe.adoc @@ -7,6 +7,7 @@ = Defining an HTTP liveness probe -Define an HTTP liveness probe by setting the `spec.livenessProbe.httpGet` field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test. +[role="_abstract"] +You can define an HTTP liveness probe by setting the `spec.livenessProbe.httpGet` field of the virtual machine (VM) configuration. You can define both HTTP and TCP tests for liveness probes in the same way as readiness probes. This procedure configures a sample liveness probe with an HTTP GET test. .Prerequisites @@ -17,8 +18,8 @@ You can define an HTTP liveness probe by setting the `spec.livenessProbe.httpGet` field . Include details of the HTTP liveness probe in the VM configuration file. + - -.Sample liveness probe with an HTTP GET test +Sample liveness probe with an HTTP GET test: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-define-http-readiness-probe.adoc b/modules/virt-define-http-readiness-probe.adoc index 008c524ef3ab..c4d0e27e92de 100644 --- a/modules/virt-define-http-readiness-probe.adoc +++ b/modules/virt-define-http-readiness-probe.adoc @@ -7,7 +7,8 @@ = Defining an HTTP readiness probe -Define an HTTP readiness probe by setting the `spec.readinessProbe.httpGet` field of the virtual machine (VM) configuration. +[role="_abstract"] +You can define an HTTP readiness probe by setting the `spec.readinessProbe.httpGet` field of the virtual machine (VM) configuration. .Prerequisites * You have installed the {oc-first}. @@ -15,8 +16,8 @@ Define an HTTP readiness probe by setting the `spec.readinessProbe.httpGet` fiel .Procedure . Include details of the readiness probe in the VM configuration file. 
+ - -.Sample readiness probe with an HTTP GET test +Sample readiness probe with an HTTP GET test: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-define-tcp-readiness-probe.adoc b/modules/virt-define-tcp-readiness-probe.adoc index 97c07208f86d..f00256663f89 100644 --- a/modules/virt-define-tcp-readiness-probe.adoc +++ b/modules/virt-define-tcp-readiness-probe.adoc @@ -7,7 +7,8 @@ = Defining a TCP readiness probe -Define a TCP readiness probe by setting the `spec.readinessProbe.tcpSocket` field of the virtual machine (VM) configuration. +[role="_abstract"] +You can define a TCP readiness probe by setting the `spec.readinessProbe.tcpSocket` field of the virtual machine (VM) configuration. .Prerequisites @@ -17,8 +18,8 @@ Define a TCP readiness probe by setting the `spec.readinessProbe.tcpSocket` fiel . Include details of the TCP readiness probe in the VM configuration file. + - -.Sample readiness probe with a TCP socket test +Sample readiness probe with a TCP socket test: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-defining-storageclass.adoc b/modules/virt-defining-storageclass.adoc index 7e081e9c95ef..0538d724348c 100644 --- a/modules/virt-defining-storageclass.adoc +++ b/modules/virt-defining-storageclass.adoc @@ -6,6 +6,7 @@ [id="virt-defining-storageclass_{context}"] = Defining a storage class +[role="_abstract"] You can define the storage class that the Containerized Data Importer (CDI) uses when allocating scratch space by adding the `spec.scratchSpaceStorageClass` field to the `HyperConverged` custom resource (CR). 
.Prerequisites diff --git a/modules/virt-defining-watchdog-device-vm.adoc b/modules/virt-defining-watchdog-device-vm.adoc index f6eba87db8f7..caf3690e4b3c 100644 --- a/modules/virt-defining-watchdog-device-vm.adoc +++ b/modules/virt-defining-watchdog-device-vm.adoc @@ -6,6 +6,7 @@ [id="virt-defining-watchdog-device-vm"] = Configuring a watchdog device for the virtual machine -You configure a watchdog device for the virtual machine (VM). +[role="_abstract"] +You can configure a watchdog device for the virtual machine (VM). .Prerequisites diff --git a/modules/virt-delete-vm-web.adoc b/modules/virt-delete-vm-web.adoc index 9d7dec285d5b..d89f1bf5ec51 100644 --- a/modules/virt-delete-vm-web.adoc +++ b/modules/virt-delete-vm-web.adoc @@ -7,6 +7,7 @@ = Deleting a virtual machine using the web console +[role="_abstract"] Deleting a virtual machine (VM) permanently removes it from the cluster. If the VM is delete protected, the *Delete* action is disabled in the VM's *Actions* menu. diff --git a/modules/virt-deleting-deployment-custom-resource.adoc b/modules/virt-deleting-deployment-custom-resource.adoc index 38843e415053..76a58f2cb53c 100644 --- a/modules/virt-deleting-deployment-custom-resource.adoc +++ b/modules/virt-deleting-deployment-custom-resource.adoc @@ -6,6 +6,7 @@ [id="virt-deleting-deployment-custom-resource_{context}"] = Deleting the HyperConverged custom resource +[role="_abstract"] To uninstall {VirtProductName}, you first delete the `HyperConverged` custom resource (CR). .Prerequisites diff --git a/modules/virt-deleting-virt-cli.adoc b/modules/virt-deleting-virt-cli.adoc index 99a7039fcc40..aee5c2accbcb 100644 --- a/modules/virt-deleting-virt-cli.adoc +++ b/modules/virt-deleting-virt-cli.adoc @@ -6,6 +6,7 @@ [id="virt-deleting-virt-cli_{context}"] = Uninstalling {VirtProductName} by using the CLI +[role="_abstract"] You can uninstall {VirtProductName} by using the OpenShift CLI (`oc`). 
.Prerequisites @@ -51,7 +52,8 @@ $ oc delete namespace openshift-cnv $ oc delete crd --dry-run=client -l operators.coreos.com/kubevirt-hyperconverged.{CNVNamespace} ---- + -.Example output +Example output: ++ ---- customresourcedefinition.apiextensions.k8s.io "cdis.cdi.kubevirt.io" deleted (dry run) customresourcedefinition.apiextensions.k8s.io "hostpathprovisioners.hostpathprovisioner.kubevirt.io" deleted (dry run) diff --git a/modules/virt-deleting-virt-crds-web.adoc b/modules/virt-deleting-virt-crds-web.adoc index e115fc2648e1..cacc7da0a1a8 100644 --- a/modules/virt-deleting-virt-crds-web.adoc +++ b/modules/virt-deleting-virt-crds-web.adoc @@ -6,6 +6,7 @@ [id="virt-deleting-virt-crds-web_{context}"] = Deleting {VirtProductName} custom resource definitions +[role="_abstract"] You can delete the {VirtProductName} custom resource definitions (CRDs) by using the web console. .Prerequisites @@ -18,4 +19,4 @@ You can delete the {VirtProductName} custom resource definitions (CRDs) by using . Select the *Label* filter and enter `operators.coreos.com/kubevirt-hyperconverged.openshift-cnv` in the *Search* field to display the {VirtProductName} CRDs. -. Click the Options menu {kebab} beside each CRD and select *Delete CustomResourceDefinition*. \ No newline at end of file +. Click the Options menu {kebab} beside each CRD and select *Delete CustomResourceDefinition*. diff --git a/modules/virt-deleting-vm-snapshot-cli.adoc b/modules/virt-deleting-vm-snapshot-cli.adoc index 5b5055c6d042..0a5f9e99ca07 100644 --- a/modules/virt-deleting-vm-snapshot-cli.adoc +++ b/modules/virt-deleting-vm-snapshot-cli.adoc @@ -6,6 +6,7 @@ [id="virt-deleting-vm-snapshot-cli_{context}"] = Deleting a virtual machine snapshot in the CLI +[role="_abstract"] You can delete an existing virtual machine (VM) snapshot by deleting the appropriate `VirtualMachineSnapshot` object. 
.Prerequisites diff --git a/modules/virt-deleting-vm-snapshot-web.adoc b/modules/virt-deleting-vm-snapshot-web.adoc index c25bc0a9e346..6851d9da5a7b 100644 --- a/modules/virt-deleting-vm-snapshot-web.adoc +++ b/modules/virt-deleting-vm-snapshot-web.adoc @@ -6,6 +6,7 @@ [id="virt-deleting-vm-snapshot-web_{context}"] = Deleting a snapshot by using the web console +[role="_abstract"] You can delete an existing virtual machine (VM) snapshot by using the web console. .Procedure diff --git a/modules/virt-deleting-vmis-cli.adoc b/modules/virt-deleting-vmis-cli.adoc index 5bebd5d67df9..90d657762041 100644 --- a/modules/virt-deleting-vmis-cli.adoc +++ b/modules/virt-deleting-vmis-cli.adoc @@ -7,6 +7,7 @@ = Deleting a standalone virtual machine instance using the CLI +[role="_abstract"] You can delete a standalone virtual machine instance (VMI) by using the `oc` command-line interface (CLI). .Prerequisites diff --git a/modules/virt-deleting-vmis-web.adoc b/modules/virt-deleting-vmis-web.adoc index 0af0eca6d0b0..dde186b4ad66 100644 --- a/modules/virt-deleting-vmis-web.adoc +++ b/modules/virt-deleting-vmis-web.adoc @@ -6,7 +6,8 @@ [id="virt-deleting-vmis-web_{context}"] = Deleting a standalone virtual machine instance using the web console -Delete a standalone virtual machine instance (VMI) from the web console. +[role="_abstract"] +You can delete a standalone virtual machine instance (VMI) from the web console. .Procedure diff --git a/modules/virt-deleting-vms.adoc b/modules/virt-deleting-vms.adoc index ec818bbdaef9..80de5cbfe6ad 100644 --- a/modules/virt-deleting-vms.adoc +++ b/modules/virt-deleting-vms.adoc @@ -7,6 +7,7 @@ = Deleting a virtual machine by using the CLI +[role="_abstract"] You can delete a virtual machine (VM) by using the `oc` command-line interface (CLI). The `oc` client enables you to perform actions on multiple VMs. 
.Prerequisites diff --git a/modules/virt-deploying-libguestfs-with-virtctl.adoc b/modules/virt-deploying-libguestfs-with-virtctl.adoc index 4b0440743a59..d6af960786fd 100644 --- a/modules/virt-deploying-libguestfs-with-virtctl.adoc +++ b/modules/virt-deploying-libguestfs-with-virtctl.adoc @@ -6,6 +6,7 @@ [id="virt-deploying-libguestfs-with-virtctl_{context}"] = Deploying libguestfs by using virtctl +[role="_abstract"] You can use the `virtctl guestfs` command to deploy an interactive container with `libguestfs-tools` and a persistent volume claim (PVC) attached to it. .Procedure diff --git a/modules/virt-deploying-operator-cli.adoc b/modules/virt-deploying-operator-cli.adoc index eff25338a58b..72350e6ee380 100644 --- a/modules/virt-deploying-operator-cli.adoc +++ b/modules/virt-deploying-operator-cli.adoc @@ -6,6 +6,7 @@ [id="virt-deploying-operator-cli_{context}"] = Deploying the {VirtProductName} Operator by using the CLI +[role="_abstract"] You can deploy the {VirtProductName} Operator by using the `oc` CLI. .Prerequisites @@ -50,7 +51,6 @@ $ watch oc get csv -n {CNVNamespace} + The following output displays if deployment was successful: + -.Example output [source,terminal,subs="attributes+"] ---- NAME DISPLAY VERSION REPLACES PHASE diff --git a/modules/virt-deploying-ssp.adoc b/modules/virt-deploying-ssp.adoc index dbcfff3e75ce..fbf11364bada 100644 --- a/modules/virt-deploying-ssp.adoc +++ b/modules/virt-deploying-ssp.adoc @@ -6,6 +6,7 @@ [id="virt-deploying-ssp_{context}"] = Deploying the Scheduling, Scale, and Performance (SSP) resources +[role="_abstract"] The SSP Operator example Tekton Tasks and Pipelines are not deployed by default when you install {VirtProductName}. To deploy the SSP Operator's Tekton resources, enable the `deployTektonTaskResources` feature gate in the `HyperConverged` custom resource (CR). 
.Prerequisites diff --git a/modules/virt-deprecated-tasks-web.adoc b/modules/virt-deprecated-tasks-web.adoc index 73a6cd8b1b4d..8ffd6f95fafd 100644 --- a/modules/virt-deprecated-tasks-web.adoc +++ b/modules/virt-deprecated-tasks-web.adoc @@ -6,6 +6,7 @@ [id="virt-deprecated-tasks.web_{context}"] = Removing deprecated or unused resources +[role="_abstract"] You can clean up deprecated or unused resources associated with the {pipelines-title} Operator. .Procedure @@ -22,4 +23,4 @@ $ oc delete clusterroles,rolebindings,serviceaccounts,configmaps,pipelines,tasks --all-namespaces ---- + -If the {pipelines-title} Operator custom resource definitions (CRDs) have already been removed, the command may return an error. You can safely ignore this, as all other matching resources will still be deleted. \ No newline at end of file +If the {pipelines-title} Operator custom resource definitions (CRDs) have already been removed, the command may return an error. You can safely ignore this, as all other matching resources will still be deleted. diff --git a/modules/virt-disable-CPU-VM-hotplug-instancetype.adoc b/modules/virt-disable-CPU-VM-hotplug-instancetype.adoc index 36544369dc17..a4609607edad 100644 --- a/modules/virt-disable-CPU-VM-hotplug-instancetype.adoc +++ b/modules/virt-disable-CPU-VM-hotplug-instancetype.adoc @@ -18,9 +18,10 @@ When a VM is created by using an instance type where the CPU hot plug is disable .Procedure -. Create a YAML file for a `VirtualMachineClusterInstancetype` custom resource (CR). Add a `maxSockets` spec to the instance type that you want to configure: +. Create a YAML file for a `VirtualMachineClusterInstancetype` custom resource (CR). Add a `maxSockets` spec to the instance type that you want to configure. 
++ +Example `VirtualMachineClusterInstancetype` CR: + -.Example `VirtualMachineClusterInstancetype` CR [source,yaml] ---- apiVersion: instancetype.kubevirt.io/v1beta1 diff --git a/modules/virt-disable-auto-updates-single-boot-source.adoc b/modules/virt-disable-auto-updates-single-boot-source.adoc index 968c5fb4e8d8..819e34ba87f0 100644 --- a/modules/virt-disable-auto-updates-single-boot-source.adoc +++ b/modules/virt-disable-auto-updates-single-boot-source.adoc @@ -7,6 +7,7 @@ [id="virt-disable-auto-updates-single-boot-source_{context}"] = Disabling automatic updates for a single boot source +[role="_abstract"] You can disable automatic updates for an individual boot source, whether it is custom or system-defined, by editing the `HyperConverged` custom resource (CR). .Prerequisites diff --git a/modules/virt-disabling-tls-for-registry.adoc b/modules/virt-disabling-tls-for-registry.adoc index 4390e6d5ad23..08e0548600f0 100644 --- a/modules/virt-disabling-tls-for-registry.adoc +++ b/modules/virt-disabling-tls-for-registry.adoc @@ -6,6 +6,7 @@ [id="virt-disabling-tls-for-registry_{context}"] = Disabling TLS for a container registry +[role="_abstract"] You can disable TLS (transport layer security) for one or more container registries by editing the `insecureRegistries` field of the `HyperConverged` custom resource. .Prerequisites @@ -23,7 +24,8 @@ $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} . Add a list of insecure registries to the `spec.storageImport.insecureRegistries` field. 
+ -.Example `HyperConverged` custom resource +Example `HyperConverged` custom resource: ++ [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1beta1 diff --git a/modules/virt-discovering-vm-internal-fqdn.adoc b/modules/virt-discovering-vm-internal-fqdn.adoc index 7c94dfc0495a..ab00ccf33011 100644 --- a/modules/virt-discovering-vm-internal-fqdn.adoc +++ b/modules/virt-discovering-vm-internal-fqdn.adoc @@ -6,6 +6,7 @@ [id="virt-discovering-vm-internal-fqdn_{context}"] = Mapping a virtual machine to a headless service by using the CLI +[role="_abstract"] To connect to a virtual machine (VM) from within the cluster by using its internal fully qualified domain name (FQDN), you must first map the VM to a headless service. Set the `spec.hostname` and `spec.subdomain` parameters in the VM configuration file. If a headless service exists with a name that matches the subdomain, a unique DNS A record is created for the VM in the form of `...svc.cluster.local`. @@ -23,7 +24,8 @@ If a headless service exists with a name that matches the subdomain, a unique DN $ oc edit vm ---- + -.Example `VirtualMachine` manifest file +Example `VirtualMachine` manifest file: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-dpdk-config-map-parameters.adoc b/modules/virt-dpdk-config-map-parameters.adoc index c151f2cf7631..3f4ec23eb3ca 100644 --- a/modules/virt-dpdk-config-map-parameters.adoc +++ b/modules/virt-dpdk-config-map-parameters.adoc @@ -6,7 +6,8 @@ [id="virt-dpdk-config-map-parameters_{context}"] = DPDK checkup config map parameters -The following table shows the mandatory and optional parameters that you can set in the `data` stanza of the input `ConfigMap` manifest when you run a cluster DPDK readiness checkup: +[role="_abstract"] +The following table shows the mandatory and optional parameters that you can set in the `data` stanza of the input `ConfigMap` manifest when you run a cluster DPDK readiness checkup. 
.DPDK checkup config map input parameters [cols="1,1,1", options="header"] diff --git a/modules/virt-dual-stack-support-services.adoc b/modules/virt-dual-stack-support-services.adoc index 5471a9755973..e88d537a6b6e 100644 --- a/modules/virt-dual-stack-support-services.adoc +++ b/modules/virt-dual-stack-support-services.adoc @@ -7,6 +7,7 @@ [id="virt-dual-stack-support-services_{context}"] = Dual-stack support +[role="_abstract"] If IPv4 and IPv6 dual-stack networking is enabled for your cluster, you can create a service that uses IPv4, IPv6, or both, by defining the `spec.ipFamilyPolicy` and the `spec.ipFamilies` fields in the `Service` object. The `spec.ipFamilyPolicy` field can be set to one of the following values: diff --git a/modules/virt-dv-annotations.adoc b/modules/virt-dv-annotations.adoc index d3c3d5f444d7..f926c75060ab 100644 --- a/modules/virt-dv-annotations.adoc +++ b/modules/virt-dv-annotations.adoc @@ -6,10 +6,13 @@ [id="virt-dv-annotations_{context}"] = Example: Data volume annotations +[role="_abstract"] This example shows how you can configure data volume (DV) annotations to control which network the importer pod uses. The `v1.multus-cni.io/default-network: bridge-network` annotation causes the pod to use the multus network named `bridge-network` as its default network. + If you want the importer pod to use both the default network from the cluster and the secondary multus network, use the `k8s.v1.cni.cncf.io/networks: ` annotation. .Multus network annotation example +==== [source,yaml] ---- apiVersion: cdi.kubevirt.io/v1beta1 @@ -21,3 +24,4 @@ metadata: # ... 
---- <1> Multus network annotation +==== diff --git a/modules/virt-early-access-releases.adoc b/modules/virt-early-access-releases.adoc index 86feca8f9eb6..fa1298f2ffaa 100644 --- a/modules/virt-early-access-releases.adoc +++ b/modules/virt-early-access-releases.adoc @@ -6,7 +6,10 @@ [id="virt-early-access-releases_{context}"] = Early access releases -You can gain access to builds in development by subscribing to the *candidate* update channel for your version of {VirtProductName}. These releases have not been fully tested by Red{nbsp}Hat and are not supported, but you can use them on non-production clusters to test capabilities and bug fixes being developed for that version. +[role="_abstract"] +You can gain access to builds in development by subscribing to the *candidate* update channel for your version of {VirtProductName}. + +These releases have not been fully tested by Red{nbsp}Hat and are not supported, but you can use them on non-production clusters to test capabilities and bug fixes being developed for that version. The *stable* channel, which matches the underlying {product-title} version and is fully tested, is suitable for production systems. You can switch between the *stable* and *candidate* channels in Operator Hub. However, updating from a *candidate* channel release to a *stable* channel release is not tested by Red{nbsp}Hat. @@ -15,4 +18,4 @@ Some candidate releases are promoted to the *stable* channel. However, releases [IMPORTANT] ==== The candidate channel is only suitable for testing purposes where destroying and recreating a cluster is acceptable. 
-==== \ No newline at end of file +==== diff --git a/modules/virt-edit-boot-order-web.adoc b/modules/virt-edit-boot-order-web.adoc index 3277f02b3fa1..bf2c6b248bec 100644 --- a/modules/virt-edit-boot-order-web.adoc +++ b/modules/virt-edit-boot-order-web.adoc @@ -6,7 +6,8 @@ [id="virt-edit-boot-order-web_{context}"] = Editing a boot order list in the web console -Edit the boot order list in the web console. +[role="_abstract"] +You can edit the boot order list in the web console. .Procedure @@ -25,7 +26,7 @@ Edit the boot order list in the web console. * If you use a screen reader, press the Up Arrow key or Down Arrow key to move the item in the boot order list. Then, press the *Tab* key to drop the item in a location of your choice. . Click *Save*. - ++ [NOTE] ==== If the virtual machine is running, changes to the boot order list will not take effect until you restart the virtual machine. diff --git a/modules/virt-edit-boot-order-yaml-web.adoc b/modules/virt-edit-boot-order-yaml-web.adoc index 49ae393ec795..d8b1e51c9871 100644 --- a/modules/virt-edit-boot-order-yaml-web.adoc +++ b/modules/virt-edit-boot-order-yaml-web.adoc @@ -7,7 +7,8 @@ [id="virt-edit-boot-order-yaml-web_{context}"] = Editing a boot order list in the YAML configuration file -Edit the boot order list in a YAML configuration file by using the CLI. +[role="_abstract"] +You can edit the boot order list in a YAML configuration file by using the CLI. .Prerequisites diff --git a/modules/virt-editing-vm-cli.adoc b/modules/virt-editing-vm-cli.adoc index df21ca72d58c..ead4927c75e3 100644 --- a/modules/virt-editing-vm-cli.adoc +++ b/modules/virt-editing-vm-cli.adoc @@ -6,6 +6,7 @@ [id="virt-editing-vm-cli_{context}"] = Editing a virtual machine by using the CLI +[role="_abstract"] You can edit a virtual machine (VM) by using the command line. 
.Prerequisites diff --git a/modules/virt-editing-vm-dynamic-key-injection.adoc b/modules/virt-editing-vm-dynamic-key-injection.adoc index 3666662e94d2..6562aea9578d 100644 --- a/modules/virt-editing-vm-dynamic-key-injection.adoc +++ b/modules/virt-editing-vm-dynamic-key-injection.adoc @@ -6,6 +6,7 @@ [id="virt-editing-vm-dynamic-key-injection_{context}"] = Enabling dynamic SSH key injection by using the web console +[role="_abstract"] You can enable dynamic key injection for a virtual machine (VM) by using the {product-title} web console. Then, you can update the public SSH key at runtime. The key is added to the VM by the QEMU guest agent, which is installed with {op-system-base-full} 9. diff --git a/modules/virt-editing-vm-yaml-web.adoc b/modules/virt-editing-vm-yaml-web.adoc index eaea4fa525b9..be316f93282b 100644 --- a/modules/virt-editing-vm-yaml-web.adoc +++ b/modules/virt-editing-vm-yaml-web.adoc @@ -13,6 +13,7 @@ endif::[] = Editing a {object} YAML configuration using the web console +[role="_abstract"] You can edit the YAML configuration of a {object} in the web console. Some parameters cannot be modified. If you click *Save* with an invalid configuration, an error message indicates the parameter that cannot be changed. ifdef::virt-edit-vms[] @@ -36,6 +37,7 @@ Navigating away from the YAML screen while editing cancels any changes to the co . Edit the file and click *Save*. +.Result A confirmation message shows that the modification has been successful and includes the updated version number for the object. 
//Ending conditional expressions diff --git a/modules/virt-editing-vmis-web.adoc b/modules/virt-editing-vmis-web.adoc index 01252c92421a..c2e0ecc64880 100644 --- a/modules/virt-editing-vmis-web.adoc +++ b/modules/virt-editing-vmis-web.adoc @@ -6,6 +6,7 @@ [id="virt-editing-vmis-web_{context}"] = Editing a standalone virtual machine instance using the web console +[role="_abstract"] You can edit the annotations and labels of a standalone virtual machine instance (VMI) using the web console. Other fields are not editable. .Procedure @@ -16,4 +17,4 @@ You can edit the annotations and labels of a standalone virtual machine instance . On the *Details* tab, click the pencil icon beside *Annotations* or *Labels*. -. Make the relevant changes and click *Save*. \ No newline at end of file +. Make the relevant changes and click *Save*. diff --git a/modules/virt-enable-guest-log-default-cli.adoc b/modules/virt-enable-guest-log-default-cli.adoc index 08c730fa6dd8..84219d900538 100644 --- a/modules/virt-enable-guest-log-default-cli.adoc +++ b/modules/virt-enable-guest-log-default-cli.adoc @@ -6,6 +6,7 @@ [id="virt-enable-guest-log-default-cli_{context}"] = Enabling default access to VM guest system logs with the CLI +[role="_abstract"] You can enable default access to VM guest system logs by editing the `HyperConverged` custom resource (CR). .Prerequisites diff --git a/modules/virt-enable-guest-log-default-web.adoc b/modules/virt-enable-guest-log-default-web.adoc index 033d0bd88d29..37706002f31a 100644 --- a/modules/virt-enable-guest-log-default-web.adoc +++ b/modules/virt-enable-guest-log-default-web.adoc @@ -6,6 +6,7 @@ [id="virt-enable-guest-log-default-web_{context}"] = Enabling default access to VM guest system logs with the web console +[role="_abstract"] You can enable default access to VM guest system logs by using the web console. 
.Procedure diff --git a/modules/virt-enable-vm-action-confirmation-web.adoc b/modules/virt-enable-vm-action-confirmation-web.adoc index 3075a274ffd6..32b62c668516 100644 --- a/modules/virt-enable-vm-action-confirmation-web.adoc +++ b/modules/virt-enable-vm-action-confirmation-web.adoc @@ -7,6 +7,7 @@ = Enabling confirmations of virtual machine actions +[role="_abstract"] The *Stop*, *Restart*, and *Pause* actions can display confirmation dialogs if confirmation is enabled. By default, confirmation is disabled. .Procedure diff --git a/modules/virt-enabling-aaq-operator.adoc b/modules/virt-enabling-aaq-operator.adoc index 080f6481a551..3651891ced92 100644 --- a/modules/virt-enabling-aaq-operator.adoc +++ b/modules/virt-enabling-aaq-operator.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-aaq-operator_{context}"] = Enabling the AAQ Operator +[role="_abstract"] To deploy the AAQ Operator, set the `enableApplicationAwareQuota` field value to `true` in the `HyperConverged` custom resource (CR). .Prerequisites diff --git a/modules/virt-enabling-dedicated-resources.adoc b/modules/virt-enabling-dedicated-resources.adoc index 2c7728c2d641..4d002ee4f9bc 100644 --- a/modules/virt-enabling-dedicated-resources.adoc +++ b/modules/virt-enabling-dedicated-resources.adoc @@ -16,7 +16,8 @@ endif::[] [id="virt-enabling-dedicated-resources_{context}"] = Enabling dedicated resources for a {object} -You enable dedicated resources for a {object} in the *Details* tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources. +[role="_abstract"] +You can enable dedicated resources for a {object} in the *Details* tab. Virtual machines that were created from a Red Hat template can be configured with dedicated resources. 
.Prerequisites diff --git a/modules/virt-enabling-disabling-downward-metrics-feature-gate-cli.adoc b/modules/virt-enabling-disabling-downward-metrics-feature-gate-cli.adoc index 8a8cc443d334..7e9a6003deab 100644 --- a/modules/virt-enabling-disabling-downward-metrics-feature-gate-cli.adoc +++ b/modules/virt-enabling-disabling-downward-metrics-feature-gate-cli.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-disabling-downward-metrics-feature-gate-cli_{context}"] = Enabling or disabling the downward metrics feature gate from the CLI +[role="_abstract"] To expose downward metrics for a host virtual machine, you can enable the `downwardMetrics` feature gate by using the command line. .Prerequisites diff --git a/modules/virt-enabling-disabling-downward-metrics-feature-gate-yaml.adoc b/modules/virt-enabling-disabling-downward-metrics-feature-gate-yaml.adoc index 8508be3926af..4cb16c70864c 100644 --- a/modules/virt-enabling-disabling-downward-metrics-feature-gate-yaml.adoc +++ b/modules/virt-enabling-disabling-downward-metrics-feature-gate-yaml.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-disabling-downward-metrics-feature-gate-yaml_{context}"] = Enabling or disabling the downward metrics feature gate in a YAML file +[role="_abstract"] To expose downward metrics for a host virtual machine, you can enable the `downwardMetrics` feature gate by editing a YAML file. .Prerequisites diff --git a/modules/virt-enabling-disabling-vm-delete-protection-cli.adoc b/modules/virt-enabling-disabling-vm-delete-protection-cli.adoc index 96db99272280..0b3e1e4aff01 100644 --- a/modules/virt-enabling-disabling-vm-delete-protection-cli.adoc +++ b/modules/virt-enabling-disabling-vm-delete-protection-cli.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-disabling-vm-delete-protection-cli_{context}"] = Enabling or disabling VM delete protection by using the CLI +[role="_abstract"] To prevent the inadvertent deletion of a virtual machine (VM), you can enable VM delete protection by using the command line. 
You can also disable delete protection for a VM. By default, delete protection is not enabled for VMs. You must set the option for each individual VM. diff --git a/modules/virt-enabling-disabling-vm-delete-protection-web.adoc b/modules/virt-enabling-disabling-vm-delete-protection-web.adoc index 9ad149ef2f14..6906d029772a 100644 --- a/modules/virt-enabling-disabling-vm-delete-protection-web.adoc +++ b/modules/virt-enabling-disabling-vm-delete-protection-web.adoc @@ -7,6 +7,7 @@ = Enabling or disabling virtual machine delete protection by using the web console +[role="_abstract"] To prevent the inadvertent deletion of a virtual machine (VM), you can enable VM delete protection by using the {product-title} web console. You can also disable delete protection for a VM. By default, delete protection is not enabled for VMs. You must set the option for each individual VM. diff --git a/modules/virt-enabling-dynamic-key-injection-cli.adoc b/modules/virt-enabling-dynamic-key-injection-cli.adoc index d53c2ab87b30..8959b9fb4a22 100644 --- a/modules/virt-enabling-dynamic-key-injection-cli.adoc +++ b/modules/virt-enabling-dynamic-key-injection-cli.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-dynamic-key-injection-cli_{context}"] = Enabling dynamic key injection by using the CLI +[role="_abstract"] You can enable dynamic key injection for a virtual machine (VM) by using the command line. Then, you can update the public SSH key at runtime. [NOTE] @@ -22,9 +23,10 @@ The key is added to the VM by the QEMU guest agent, which is installed automatic .Procedure -. Create a manifest file for a `VirtualMachine` object and a `Secret` object: +. Create a manifest file for a `VirtualMachine` object and a `Secret` object. 
++ +Example manifest: + -.Example manifest [source,yaml] ---- include::snippets/virt-dynamic-key.yaml[] @@ -55,7 +57,8 @@ $ virtctl start vm example-vm -n example-namespace $ oc describe vm example-vm -n example-namespace ---- + -.Example output +Example output: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-enabling-heterogeneous-clusters.adoc b/modules/virt-enabling-heterogeneous-clusters.adoc index c77b9dc07fcc..32af09677f47 100644 --- a/modules/virt-enabling-heterogeneous-clusters.adoc +++ b/modules/virt-enabling-heterogeneous-clusters.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-heterogeneous-clusters_{context}"] = Enabling heterogeneous cluster support +[role="_abstract"] You can enable golden image support for heterogeneous clusters by setting the `enableMultiArchBootImageImport` feature gate to `true` in the `HyperConverged` custom resource (CR). :FeatureName: Golden image support for heterogeneous clusters diff --git a/modules/virt-enabling-load-balancer-service-web.adoc b/modules/virt-enabling-load-balancer-service-web.adoc index b8240426fd1f..510d8d7e960c 100644 --- a/modules/virt-enabling-load-balancer-service-web.adoc +++ b/modules/virt-enabling-load-balancer-service-web.adoc @@ -7,6 +7,7 @@ [id="virt-enabling-load-balancer-service-web_{context}"] = Enabling load balancer service creation by using the web console +[role="_abstract"] You can enable the creation of load balancer services for a virtual machine (VM) by using the {product-title} web console. .Prerequisites @@ -20,4 +21,4 @@ You can enable the creation of load balancer services for a virtual machine (VM) . Navigate to *Virtualization* -> *Overview*. . On the *Settings* tab, click *Cluster*. . Expand *General settings* and *SSH configuration*. -. Set *SSH over LoadBalancer service* to on. \ No newline at end of file +. Set *SSH over LoadBalancer service* to on. 
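The heterogeneous cluster support module above sets the `enableMultiArchBootImageImport` feature gate to `true` in the `HyperConverged` custom resource. As a hedged sketch, the resulting CR fragment might look like the following; the `spec.featureGates` placement is an assumption based on how other feature gates are set in this document, and the namespace is a placeholder for the {VirtProductName} namespace:

```yaml
apiVersion: hco.kubevirt.io/v1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv          # placeholder; use your {VirtProductName} namespace
spec:
  featureGates:
    # Assumed placement: other HCO feature gates in this document live under spec.featureGates
    enableMultiArchBootImageImport: true
```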
diff --git a/modules/virt-enabling-multi-queue.adoc b/modules/virt-enabling-multi-queue.adoc index 7edf545073bd..89770b02615f 100644 --- a/modules/virt-enabling-multi-queue.adoc +++ b/modules/virt-enabling-multi-queue.adoc @@ -6,7 +6,8 @@ [id="virt-enabling-multi-queue_{context}"] = Enabling multi-queue functionality -Enable multi-queue functionality for interfaces configured with a VirtIO model. +[role="_abstract"] +You can enable multi-queue functionality for interfaces configured with a VirtIO model. .Procedure @@ -22,4 +23,4 @@ spec: networkInterfaceMultiqueue: true ---- -. Save the `VirtualMachine` manifest file to apply your changes. \ No newline at end of file +. Save the `VirtualMachine` manifest file to apply your changes. diff --git a/modules/virt-enabling-persistent-efi.adoc b/modules/virt-enabling-persistent-efi.adoc index 5647cfde574f..b72a54a9e37b 100644 --- a/modules/virt-enabling-persistent-efi.adoc +++ b/modules/virt-enabling-persistent-efi.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-persistent-efi_{context}"] = Enabling persistent EFI +[role="_abstract"] You can enable EFI persistence in a VM by configuring an RWX storage class at the cluster level and adjusting the settings in the EFI section of the VM. .Prerequisites diff --git a/modules/virt-enabling-persistentreservation-feature-gate-cli.adoc b/modules/virt-enabling-persistentreservation-feature-gate-cli.adoc index 5a9cfbaefefe..3a15f88c5110 100644 --- a/modules/virt-enabling-persistentreservation-feature-gate-cli.adoc +++ b/modules/virt-enabling-persistentreservation-feature-gate-cli.adoc @@ -6,7 +6,8 @@ [id="virt-enabling-persistentreservation-feature-gate-cli_{context}"] = Enabling the PersistentReservation feature gate by using the CLI -You enable the `persistentReservation` feature gate by using the command line. Enabling the feature gate requires cluster administrator privileges. +[role="_abstract"] +You can enable the `persistentReservation` feature gate by using the command line. 
Enabling the feature gate requires cluster administrator privileges. .Prerequisites diff --git a/modules/virt-enabling-persistentreservation-feature-gate-web.adoc b/modules/virt-enabling-persistentreservation-feature-gate-web.adoc index 49c805ba5143..8f7b09017723 100644 --- a/modules/virt-enabling-persistentreservation-feature-gate-web.adoc +++ b/modules/virt-enabling-persistentreservation-feature-gate-web.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-persistentreservation-feature-gate-web_{context}"] = Enabling the PersistentReservation feature gate by using the web console +[role="_abstract"] You must enable the PersistentReservation feature gate to allow a LUN-backed block mode virtual machine (VM) disk to be shared among multiple virtual machines. Enabling the feature gate requires cluster administrator privileges. .Procedure @@ -16,4 +17,4 @@ You must enable the PersistentReservation feature gate to allow a LUN-backed blo . Select *Cluster*. -. Expand *SCSI persistent reservation* and set *Enable persistent reservation* to on. \ No newline at end of file +. Expand *SCSI persistent reservation* and set *Enable persistent reservation* to on. diff --git a/modules/virt-enabling-persistentreservation-feature-gate.adoc b/modules/virt-enabling-persistentreservation-feature-gate.adoc index 38d5e52c9d75..b5c36e14af89 100644 --- a/modules/virt-enabling-persistentreservation-feature-gate.adoc +++ b/modules/virt-enabling-persistentreservation-feature-gate.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-persistentreservation-feature-gate_{context}"] = Enabling the PersistentReservation feature gate +[role="_abstract"] You can enable the SCSI `persistentReservation` feature gate and allow a LUN-backed block mode virtual machine (VM) disk to be shared among multiple virtual machines. The `persistentReservation` feature gate is disabled by default. You can enable the `persistentReservation` feature gate by using the web console or the command line. 
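A minimal sketch of the CLI form of the `persistentReservation` change described above, assuming the gate lives under `spec.featureGates` like the other feature gates in this document; the namespace is a placeholder:

```yaml
apiVersion: hco.kubevirt.io/v1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv          # placeholder; use your {VirtProductName} namespace
spec:
  featureGates:
    persistentReservation: true     # disabled by default; enables SCSI persistent reservation
```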
diff --git a/modules/virt-enabling-preallocation-for-dv.adoc b/modules/virt-enabling-preallocation-for-dv.adoc index 77c46c9024e4..997b0c1538a0 100644 --- a/modules/virt-enabling-preallocation-for-dv.adoc +++ b/modules/virt-enabling-preallocation-for-dv.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-preallocation-for-dv_{context}"] = Enabling preallocation for a data volume +[role="_abstract"] You can enable preallocation for specific data volumes by including the `spec.preallocation` field in the data volume manifest. You can enable preallocation mode by using either the web console or the OpenShift CLI (`oc`). Preallocation mode is supported for all CDI source types. diff --git a/modules/virt-enabling-usb-host-passthrough.adoc b/modules/virt-enabling-usb-host-passthrough.adoc index 58c5006f49ed..a7f9b818c76b 100644 --- a/modules/virt-enabling-usb-host-passthrough.adoc +++ b/modules/virt-enabling-usb-host-passthrough.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-usb-host-passthrough_{context}"] = Enabling USB host passthrough +[role="_abstract"] To attach a USB device to a virtual machine (VM), you must first enable USB host passthrough at the cluster level. To do this, specify a resource name and USB device name for each device that you want to add and then assign to a VM. You can allocate more than one device, each of which is known as a `selector` in the `HyperConverged` custom resource (CR), to a single resource name. If you have multiple identical USB devices on the cluster, you can choose to allocate a VM to a specific device. @@ -125,4 +126,4 @@ spec: ---- <1> Lists the host devices that have permission to be used in the cluster. <2> Lists the available USB devices. -<3> Uses `resourceName: deviceName` for each device you want to add and assign to the VM. In this example, the resource is bound to three devices, each of which is identified by `vendor` and `product` and is known as a `selector`. 
\ No newline at end of file +<3> Uses `resourceName: deviceName` for each device you want to add and assign to the VM. In this example, the resource is bound to three devices, each of which is identified by `vendor` and `product` and is known as a `selector`. diff --git a/modules/virt-enabling-vms-ibm-secure-execution-ibm-z.adoc b/modules/virt-enabling-vms-ibm-secure-execution-ibm-z.adoc index 74073fa77d44..c3c897aa6960 100644 --- a/modules/virt-enabling-vms-ibm-secure-execution-ibm-z.adoc +++ b/modules/virt-enabling-vms-ibm-secure-execution-ibm-z.adoc @@ -6,6 +6,7 @@ [id="virt-enabling-vms-ibm-secure-execution-ibm-z_{context}"] = Enabling VMs to run {ibm-title} Secure Execution on {ibm-z-title} and {ibm-linuxone-title} +[role="_abstract"] To enable {ibm-name} Secure Execution virtual machines (VMs) on {ibm-z-name} and {ibm-linuxone-name} on the compute nodes of your cluster, you must ensure that you meet the prerequisites and complete the following steps. .Prerequisites diff --git a/modules/virt-enabling-volume-snapshot-boot-source.adoc b/modules/virt-enabling-volume-snapshot-boot-source.adoc index 9f170abd63e1..f285ec0df15c 100644 --- a/modules/virt-enabling-volume-snapshot-boot-source.adoc +++ b/modules/virt-enabling-volume-snapshot-boot-source.adoc @@ -7,7 +7,10 @@ [id="virt-enabling-volume-snapshot-boot-source_{context}"] = Enabling volume snapshot boot sources -Enable volume snapshot boot sources by setting the parameter in the `StorageProfile` associated with the storage class that stores operating system base images. Although `DataImportCron` was originally designed to maintain only PVC sources, `VolumeSnapshot` sources scale better than PVC sources for certain storage types. +[role="_abstract"] +You can enable volume snapshot boot sources by setting the parameter in the `StorageProfile` associated with the storage class that stores operating system base images. 
+ +Although `DataImportCron` was originally designed to maintain only PVC sources, `VolumeSnapshot` sources scale better than PVC sources for certain storage types. [NOTE] ==== @@ -33,7 +36,8 @@ $ oc edit storageprofile . Edit the storage profile, if needed, by updating the `dataImportCronSourceFormat` specification to `snapshot`. + -.Example storage profile +Example storage profile: ++ [source,yaml] ---- apiVersion: cdi.kubevirt.io/v1beta1 diff --git a/modules/virt-example-bond-nncp.adoc b/modules/virt-example-bond-nncp.adoc index 3726364e062b..c5997d8db38b 100644 --- a/modules/virt-example-bond-nncp.adoc +++ b/modules/virt-example-bond-nncp.adoc @@ -6,15 +6,16 @@ [id="virt-example-bond-nncp_{context}"] = Example: Bond interface node network configuration policy -Create a bond interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` manifest to the cluster. +[role="_abstract"] +You can create a bond interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` manifest to the cluster. [NOTE] ==== {VirtProductName} only supports the following bond modes: -* `active-backup` + -* `balance-xor` + -* `802.3ad` + +* `active-backup` +* `balance-xor` +* `802.3ad` Other bond modes are not supported. ==== diff --git a/modules/virt-example-bridge-nncp.adoc b/modules/virt-example-bridge-nncp.adoc index 54d75280461a..865e58a05c8a 100644 --- a/modules/virt-example-bridge-nncp.adoc +++ b/modules/virt-example-bridge-nncp.adoc @@ -6,7 +6,8 @@ [id="virt-example-bridge-nncp_{context}"] = Example: Linux bridge interface node network configuration policy -Create a Linux bridge interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` manifest +[role="_abstract"] +You can create a Linux bridge interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. 
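A minimal Linux bridge `NodeNetworkConfigurationPolicy` of the kind this module describes might look like the following sketch; the policy, bridge, and NIC names are placeholders, and the STP setting is shown only as a common default:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy            # placeholder policy name
spec:
  desiredState:
    interfaces:
      - name: br1                  # bridge to create on each node
        type: linux-bridge
        state: up
        bridge:
          options:
            stp:
              enabled: false       # assumption: STP left disabled for a simple bridge
          port:
            - name: eth1           # placeholder NIC attached to the bridge
```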
diff --git a/modules/virt-example-configmap-tls-certificate.adoc b/modules/virt-example-configmap-tls-certificate.adoc index 2dbaa35d1e35..554b729b536b 100644 --- a/modules/virt-example-configmap-tls-certificate.adoc +++ b/modules/virt-example-configmap-tls-certificate.adoc @@ -6,6 +6,7 @@ [id="virt-example-configmap-tls-certificate_{context}"] = Example: Config map created from a TLS certificate +[role="_abstract"] The following example shows a config map created from a `ca.pem` TLS certificate. [source,yaml] diff --git a/modules/virt-example-enabling-lldp-policy.adoc b/modules/virt-example-enabling-lldp-policy.adoc index 1915a0be1b64..7119e4f48752 100644 --- a/modules/virt-example-enabling-lldp-policy.adoc +++ b/modules/virt-example-enabling-lldp-policy.adoc @@ -6,7 +6,10 @@ [id="virt-example-enabling-lldp-policy_{context}"] = Example: Node network configuration policy to enable LLDP reporting -The following YAML file is an example of a `NodeNetworkConfigurationPolicy` manifest that enables the Link Layer Discovery Protocol (LLDP) listener for all ethernet ports in your {product-title} cluster. Devices on a local area network can use LLDP to advertise their identity, capabilities, and neighbor information. +[role="_abstract"] +The following YAML file is an example of a `NodeNetworkConfigurationPolicy` manifest that enables the Link Layer Discovery Protocol (LLDP) listener for all Ethernet ports in your {product-title} cluster. + +Devices on a local area network can use LLDP to advertise their identity, capabilities, and neighbor information. [source,yaml] @@ -25,4 +28,4 @@ spec: # ... ---- <1> Specifies the name of the node network configuration policy. -<2> Specifies that LLDP is enabled for all ethernet ports that have the interface state set to `up`. \ No newline at end of file +<2> Specifies that LLDP is enabled for all Ethernet ports that have the interface state set to `up`. 
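As a hedged sketch, the per-interface setting that such an LLDP policy toggles can be written as follows; the interface name is a placeholder, and matching every Ethernet port would be done by the policy rather than by naming one NIC:

```yaml
spec:
  desiredState:
    interfaces:
      - name: eth1         # placeholder; the module applies this to all Ethernet ports
        type: ethernet
        state: up
        lldp:
          enabled: true    # turns on the LLDP listener for this port
```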
diff --git a/modules/virt-example-ethernet-nncp.adoc b/modules/virt-example-ethernet-nncp.adoc index a7efbcb17b34..a60a436e1ae8 100644 --- a/modules/virt-example-ethernet-nncp.adoc +++ b/modules/virt-example-ethernet-nncp.adoc @@ -6,7 +6,8 @@ [id="virt-example-ethernet-nncp_{context}"] = Example: Ethernet interface node network configuration policy -Configure an Ethernet interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` manifest to the cluster. +[role="_abstract"] +You can configure an Ethernet interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` manifest to the cluster. The following YAML file is an example of a manifest for an Ethernet interface. It includes sample values that you must replace with your own information. diff --git a/modules/virt-example-host-vrf.adoc b/modules/virt-example-host-vrf.adoc index d9fc8e81b8ff..be285a21f260 100644 --- a/modules/virt-example-host-vrf.adoc +++ b/modules/virt-example-host-vrf.adoc @@ -6,7 +6,8 @@ [id="virt-example-host-vrf_{context}"] = Example: Network interface with a VRF instance node network configuration policy -Associate a Virtual Routing and Forwarding (VRF) instance with a network interface by applying a `NodeNetworkConfigurationPolicy` custom resource (CR). +[role="_abstract"] +You can associate a Virtual Routing and Forwarding (VRF) instance with a network interface by applying a `NodeNetworkConfigurationPolicy` custom resource (CR). By associating a VRF instance with a network interface, you can support traffic isolation, independent routing decisions, and the logical separation of network resources. 
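The VRF association just described can be sketched as a `NodeNetworkConfigurationPolicy` of this shape; the policy name, VRF name, enslaved NIC, and route table ID are all placeholders:

```yaml
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: vrf-policy              # placeholder policy name
spec:
  desiredState:
    interfaces:
      - name: vrf1              # VRF instance to create
        type: vrf
        state: up
        vrf:
          port:
            - eth1              # placeholder NIC associated with the VRF
          route-table-id: 2     # placeholder routing table for isolated routing decisions
```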
diff --git a/modules/virt-example-inherit-static-ip-from-nic.adoc b/modules/virt-example-inherit-static-ip-from-nic.adoc index 0c1174e575f5..077285ceb87d 100644 --- a/modules/virt-example-inherit-static-ip-from-nic.adoc +++ b/modules/virt-example-inherit-static-ip-from-nic.adoc @@ -6,6 +6,7 @@ [id="virt-example-inherit-static-ip-from-nic_{context}"] = Example: Linux bridge interface node network configuration policy to inherit static IP address from the NIC attached to the bridge +[role="_abstract"] Create a Linux bridge interface on nodes in the cluster and transfer the static IP configuration of the NIC to the bridge by applying a single `NodeNetworkConfigurationPolicy` manifest to the cluster. The following YAML file is an example of a manifest for a Linux bridge interface. It includes sample values that you must replace with your own information. diff --git a/modules/virt-example-nmstate-IP-management.adoc b/modules/virt-example-nmstate-IP-management.adoc index bc8c7aab1c2f..d2a1a47c9366 100644 --- a/modules/virt-example-nmstate-IP-management.adoc +++ b/modules/virt-example-nmstate-IP-management.adoc @@ -6,6 +6,7 @@ [id="virt-example-nmstate-IP-management_{context}"] = Examples: IP management +[role="_abstract"] The following example configuration snippets show different methods of IP management. These examples use the `ethernet` interface type to simplify the example while showing the related context in the policy configuration. These IP management examples can be used with the other interface types. @@ -138,7 +139,8 @@ The following example shows a default situation that stores DNS values globally: * Configure a static DNS without a network interface. Note that when updating the `/etc/resolv.conf` file on a host node, you do not need to specify an interface, IPv4 or IPv6, in the `NodeNetworkConfigurationPolicy` (NNCP) manifest. 
+ -.Example of a DNS configuration for a network interface that globally stores DNS values +Example of a DNS configuration for a network interface that globally stores DNS values: ++ [source,yaml] ---- apiVersion: nmstate.io/v1 @@ -192,7 +194,8 @@ The following examples show situations that require configuring a network interf * If you want to rank a static DNS name server over a dynamic DNS name server, define the interface that runs either the Dynamic Host Configuration Protocol (DHCP) or the IPv6 Autoconfiguration (`autoconf`) mechanism in the network interface YAML configuration file. + -.Example configuration that adds `192.0.2.1` to DNS name servers retrieved from the DHCPv4 network protocol +Example configuration that adds `192.0.2.1` to DNS name servers retrieved from the DHCPv4 network protocol: ++ [source,yaml] ---- # ... @@ -218,7 +221,8 @@ interfaces: Storing DNS values at the network interface level might cause name resolution issues after you attach the interface to network components, such as an Open vSwitch (OVS) bridge, a Linux bridge, or a bond. ==== + -.Example configuration that stores DNS values at the interface level +Example configuration that stores DNS values at the interface level: ++ [source,yaml] ---- # ... @@ -261,7 +265,8 @@ Specifying the following `dns-resolver` configurations in the network interface * Specifying domain suffixes for the `search` parameter and not setting IP addresses for the `server` parameter. ==== + -.Example configuration that sets `example.com` and `example.org` static DNS search domains along with static DNS name server settings +Example configuration that sets `example.com` and `example.org` static DNS search domains along with static DNS name server settings: ++ [source,yaml] ---- # ... 
diff --git a/modules/virt-example-nmstate-multiple-interfaces.adoc b/modules/virt-example-nmstate-multiple-interfaces.adoc index 1845195bf464..a208eca665a3 100644 --- a/modules/virt-example-nmstate-multiple-interfaces.adoc +++ b/modules/virt-example-nmstate-multiple-interfaces.adoc @@ -6,6 +6,7 @@ [id="virt-example-nmstate-multiple-interfaces_{context}"] = Example: Multiple interfaces in the same node network configuration policy +[role="_abstract"] You can create multiple interfaces in the same node network configuration policy. These interfaces can reference each other, allowing you to build and deploy a network configuration by using a single policy manifest. The following example YAML file creates a bond that is named `bond10` across two NICs and a VLAN that is named `bond10.103` that connects to the bond. diff --git a/modules/virt-example-vf-host-services.adoc b/modules/virt-example-vf-host-services.adoc index 762a3302935a..218cb24dc163 100644 --- a/modules/virt-example-vf-host-services.adoc +++ b/modules/virt-example-vf-host-services.adoc @@ -6,7 +6,8 @@ [id="virt-example-vf-host-services_{context}"] = Example: Node network configuration policy for virtual functions -Update host network settings for Single Root I/O Virtualization (SR-IOV) network virtual functions (VF) in an existing cluster by applying a `NodeNetworkConfigurationPolicy` manifest. +[role="_abstract"] +You can update host network settings for Single Root I/O Virtualization (SR-IOV) network virtual functions (VFs) in an existing cluster by applying a `NodeNetworkConfigurationPolicy` manifest. 
You can apply a `NodeNetworkConfigurationPolicy` manifest to an existing cluster to complete the following tasks: diff --git a/modules/virt-example-vlan-nncp.adoc b/modules/virt-example-vlan-nncp.adoc index 9f3e64cd0555..52d09df531f9 100644 --- a/modules/virt-example-vlan-nncp.adoc +++ b/modules/virt-example-vlan-nncp.adoc @@ -6,7 +6,8 @@ [id="virt-example-vlan-nncp_{context}"] = Example: VLAN interface node network configuration policy -Create a VLAN interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` manifest to the cluster. +[role="_abstract"] +You can create a VLAN interface on nodes in the cluster by applying a `NodeNetworkConfigurationPolicy` manifest to the cluster. [NOTE] ==== diff --git a/modules/virt-example-vm-node-placement-node-affinity.adoc b/modules/virt-example-vm-node-placement-node-affinity.adoc index 91ef201e2df7..7d4882766b29 100644 --- a/modules/virt-example-vm-node-placement-node-affinity.adoc +++ b/modules/virt-example-vm-node-placement-node-affinity.adoc @@ -6,11 +6,13 @@ [id="virt-example-vm-node-placement-node-affinity_{context}"] = Example: VM node placement with node affinity +[role="_abstract"] In this example, the VM must be scheduled on a node that has the label `example.io/example-key = example-value-1` or the label `example.io/example-key = example-value-2`. The constraint is met if only one of the labels is present on the node. If neither label is present, the VM is not scheduled. If possible, the scheduler avoids nodes that have the label `example-node-label-key = example-node-label-value`. However, if all candidate nodes have this label, the scheduler ignores this constraint. .Example VM manifest +==== [source,yaml] ---- metadata: @@ -42,3 +44,4 @@ spec: ---- <1> If you use the `requiredDuringSchedulingIgnoredDuringExecution` rule type, the VM is not scheduled if the constraint is not met. 
<2> If you use the `preferredDuringSchedulingIgnoredDuringExecution` rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. +==== diff --git a/modules/virt-example-vm-node-placement-node-selector.adoc b/modules/virt-example-vm-node-placement-node-selector.adoc index aca177ca30f5..2f426afe720a 100644 --- a/modules/virt-example-vm-node-placement-node-selector.adoc +++ b/modules/virt-example-vm-node-placement-node-selector.adoc @@ -6,6 +6,7 @@ [id="virt-example-vm-node-placement-node-selector_{context}"] = Example: VM node placement with nodeSelector +[role="_abstract"] In this example, the virtual machine requires a node that has metadata containing both `example-key-1 = example-value-1` and `example-key-2 = example-value-2` labels. [WARNING] @@ -14,6 +15,7 @@ If there are no nodes that fit this description, the virtual machine is not sche ==== .Example VM manifest +==== [source,yaml] ---- metadata: @@ -28,3 +30,4 @@ spec: example-key-2: example-value-2 # ... ---- +==== diff --git a/modules/virt-example-vm-node-placement-pod-affinity.adoc b/modules/virt-example-vm-node-placement-pod-affinity.adoc index 911552cb5491..5390b6abd655 100644 --- a/modules/virt-example-vm-node-placement-pod-affinity.adoc +++ b/modules/virt-example-vm-node-placement-pod-affinity.adoc @@ -6,11 +6,13 @@ [id="virt-example-vm-node-placement-pod-affinity_{context}"] = Example: VM node placement with pod affinity and pod anti-affinity +[role="_abstract"] In this example, the VM must be scheduled on a node that has a running pod with the label `example-key-1 = example-value-1`. If there is no such pod running on any node, the VM is not scheduled. If possible, the VM is not scheduled on a node that has any pod with the label `example-key-2 = example-value-2`. However, if all candidate nodes have a pod with this label, the scheduler ignores this constraint. 
.Example VM manifest +==== [source,yaml] ---- metadata: @@ -45,3 +47,4 @@ spec: ---- <1> If you use the `requiredDuringSchedulingIgnoredDuringExecution` rule type, the VM is not scheduled if the constraint is not met. <2> If you use the `preferredDuringSchedulingIgnoredDuringExecution` rule type, the VM is still scheduled if the constraint is not met, as long as all required constraints are met. +==== diff --git a/modules/virt-example-vm-node-placement-tolerations.adoc b/modules/virt-example-vm-node-placement-tolerations.adoc index 717c099f31eb..0bb0e92133cb 100644 --- a/modules/virt-example-vm-node-placement-tolerations.adoc +++ b/modules/virt-example-vm-node-placement-tolerations.adoc @@ -6,6 +6,7 @@ [id="virt-example-vm-node-placement-tolerations_{context}"] = Example: VM node placement with tolerations +[role="_abstract"] In this example, nodes that are reserved for virtual machines are already labeled with the `key=virtualization:NoSchedule` taint. Because this virtual machine has matching `tolerations`, it can schedule onto the tainted nodes. [NOTE] @@ -14,6 +15,7 @@ A virtual machine that tolerates a taint is not required to schedule onto a node ==== .Example VM manifest +==== [source,yaml] ---- metadata: @@ -28,3 +30,4 @@ spec: effect: "NoSchedule" # ... ---- +==== diff --git a/modules/virt-expanding-storage-with-data-volumes.adoc b/modules/virt-expanding-storage-with-data-volumes.adoc index bde629b2519e..274f0244b3f7 100644 --- a/modules/virt-expanding-storage-with-data-volumes.adoc +++ b/modules/virt-expanding-storage-with-data-volumes.adoc @@ -6,6 +6,7 @@ [id="virt-expanding-storage-with-data-volumes_{context}"] = Expanding available virtual storage by adding blank data volumes +[role="_abstract"] You can expand the available storage of a virtual machine (VM) by adding blank data volumes. .Prerequisites @@ -17,7 +18,6 @@ You can expand the available storage of a virtual machine (VM) by adding blank d . 
Create a `DataVolume` manifest as shown in the following example: + -.Example `DataVolume` manifest [source,yaml] ---- apiVersion: cdi.kubevirt.io/v1beta1 diff --git a/modules/virt-expanding-vm-disk-pvc.adoc b/modules/virt-expanding-vm-disk-pvc.adoc index 52dd08fc6a72..b9fa16373706 100644 --- a/modules/virt-expanding-vm-disk-pvc.adoc +++ b/modules/virt-expanding-vm-disk-pvc.adoc @@ -6,6 +6,7 @@ [id="virt-expanding-vm-disk-pvc_{context}"] = Increasing a VM disk size by expanding the PVC of the disk +[role="_abstract"] You can increase the size of a virtual machine (VM) disk by expanding the persistent volume claim (PVC) of the disk. To specify the increased PVC volume, you can use the web console with the VM running. Alternatively, you can edit the PVC manifest in the CLI. [NOTE] diff --git a/modules/virt-exposing-pci-device-in-cluster-cli.adoc b/modules/virt-exposing-pci-device-in-cluster-cli.adoc index a41c35d0ebe6..a486040482ca 100644 --- a/modules/virt-exposing-pci-device-in-cluster-cli.adoc +++ b/modules/virt-exposing-pci-device-in-cluster-cli.adoc @@ -6,6 +6,7 @@ [id="virt-exposing-pci-device-in-cluster-cli_{context}"] = Exposing PCI host devices in the cluster using the CLI +[role="_abstract"] To expose PCI host devices in the cluster, add details about the PCI devices to the `spec.permittedHostDevices.pciHostDevices` array of the `HyperConverged` custom resource (CR). .Prerequisites @@ -20,9 +21,10 @@ To expose PCI host devices in the cluster, add details about the PCI devices to $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} ---- -. Add the PCI device information to the `spec.permittedHostDevices.pciHostDevices` array. For example: +. Add the PCI device information to the `spec.permittedHostDevices.pciHostDevices` array. 
++ +Example configuration file: + -.Example configuration file [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1 @@ -63,7 +65,8 @@ The above example snippet shows two PCI host devices that are named `nvidia.com/ $ oc describe node ---- + -.Example output +Example output: ++ [source,terminal] ---- Capacity: diff --git a/modules/virt-generalizing-linux-vm-image.adoc b/modules/virt-generalizing-linux-vm-image.adoc index a87130c9a6d9..01b7b9b02746 100644 --- a/modules/virt-generalizing-linux-vm-image.adoc +++ b/modules/virt-generalizing-linux-vm-image.adoc @@ -6,6 +6,7 @@ [id="virt-generalizing-linux-vm-image_{context}"] = Generalizing a VM image +[role="_abstract"] You can generalize a {op-system-base-full} image to remove all system-specific configuration data before you use the image to create a golden image, a preconfigured snapshot of a virtual machine (VM). You can use a golden image to deploy new VMs. You can generalize a {op-system-base} VM by using the `virtctl`, `guestfs`, and `virt-sysprep` tools. @@ -34,7 +35,8 @@ $ virtctl stop $ oc get vm -o jsonpath="{.spec.template.spec.volumes}{'\n'}" ---- + -.Example output +Example output: ++ [source,terminal] ---- [{"dataVolume":{"name":""},"name":"rootdisk"},{"cloudInitNoCloud":{...}] @@ -47,7 +49,8 @@ $ oc get vm -o jsonpath="{.spec.template.spec.volumes}{'\n'}" $ oc get pvc ---- + -.Example output +Example output: ++ [source,terminal] ---- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE @@ -99,4 +102,6 @@ $ virt-sysprep -a disk.img .. Click *Save*. +.Result + The new volume appears in the *Select volume to boot from* list. This is your new golden image. You can use this volume to create new VMs. 
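Reviewer note: the generalizing module above ends with a golden image volume that you can use to create new VMs. A minimal sketch of a `VirtualMachine` manifest that boots from such a volume might help here; the `DataSource` name `rhel9-golden`, the namespace, and the storage size are hypothetical placeholders, not values from these modules, and only the fields relevant to the boot source are shown:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-from-golden-image # hypothetical name
spec:
  dataVolumeTemplates:
  - metadata:
      name: vm-from-golden-image
    spec:
      sourceRef: # clones the golden image into a new root disk
        kind: DataSource
        name: rhel9-golden # hypothetical DataSource for the golden image
        namespace: golden-images # hypothetical namespace
      storage:
        resources:
          requests:
            storage: 30Gi # illustrative size
  template:
    spec:
      # other required VM fields omitted from this sketch
      volumes:
      - name: rootdisk
        dataVolume:
          name: vm-from-golden-image
----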
diff --git a/modules/virt-generalizing-windows-sysprep.adoc b/modules/virt-generalizing-windows-sysprep.adoc index 5d84cc310ae9..f313e7904529 100644 --- a/modules/virt-generalizing-windows-sysprep.adoc +++ b/modules/virt-generalizing-windows-sysprep.adoc @@ -6,6 +6,7 @@ [id="virt-generalizing-windows-sysprep_{context}"] = Generalizing a Windows VM image +[role="_abstract"] You can generalize a Windows operating system image to remove all system-specific configuration data before you use the image to create a new virtual machine (VM). Before generalizing the VM, you must ensure the `sysprep` tool cannot detect an answer file after the unattended Windows installation. @@ -31,4 +32,6 @@ Before generalizing the VM, you must ensure the `sysprep` tool cannot detect an ---- . After the `sysprep` tool completes, the Windows VM shuts down. The disk image of the VM is now available to use as an installation image for Windows VMs. +.Result + You can now specialize the VM. diff --git a/modules/virt-generating-a-vm-memory-dump.adoc b/modules/virt-generating-a-vm-memory-dump.adoc index a535dfa211ac..4629e3643e89 100644 --- a/modules/virt-generating-a-vm-memory-dump.adoc +++ b/modules/virt-generating-a-vm-memory-dump.adoc @@ -5,6 +5,7 @@ [id="virt-generating-a-vm-memory-dump_{context}"] = Generating a VM memory dump +[role="_abstract"] When a virtual machine (VM) terminates unexpectedly, you can use the `virtctl memory-dump` command to output a VM memory dump and save it on a persistent volume claim (PVC). Afterwards, you can analyze the memory dump to diagnose and troubleshoot issues on the VM. // You can specify an existing PVC or use the `--create-claim` flag to create a new PVC.
@@ -60,4 +61,4 @@ Alternatively, you can inspect the memory dump, for example by using link:https: [source,terminal] ---- $ virtctl memory-dump remove ----- \ No newline at end of file +---- diff --git a/modules/virt-golden-images-namespace-cli.adoc b/modules/virt-golden-images-namespace-cli.adoc index 24d979f69701..6aafb030bd20 100644 --- a/modules/virt-golden-images-namespace-cli.adoc +++ b/modules/virt-golden-images-namespace-cli.adoc @@ -6,6 +6,7 @@ [id="virt-golden-images-namespace-cli_{context}"] = Configuring a custom namespace for golden images by using the CLI +[role="_abstract"] You can configure a custom namespace for golden images in your cluster by setting the `spec.commonBootImageNamespace` field in the `HyperConverged` custom resource (CR). .Prerequisites @@ -23,9 +24,10 @@ You can configure a custom namespace for golden images in your cluster by settin $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} ---- -. Configure the custom namespace by updating the value of the `spec.commonBootImageNamespace` field: +. Configure the custom namespace by updating the value of the `spec.commonBootImageNamespace` field. ++ +Example configuration file: + -.Example configuration file [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1 @@ -39,4 +41,4 @@ spec: ---- <1> The namespace to use for golden images. -. Save your changes and exit the editor. \ No newline at end of file +. Save your changes and exit the editor. diff --git a/modules/virt-golden-images-namespace-web.adoc b/modules/virt-golden-images-namespace-web.adoc index ed5b710f92eb..684f63bc4661 100644 --- a/modules/virt-golden-images-namespace-web.adoc +++ b/modules/virt-golden-images-namespace-web.adoc @@ -6,6 +6,7 @@ [id="virt-golden-images-namespace-web_{context}"] = Configuring a custom namespace for golden images by using the web console +[role="_abstract"] You can configure a custom namespace for golden images in your cluster by using the {product-title} web console. 
.Procedure @@ -23,4 +24,4 @@ You can configure a custom namespace for golden images in your cluster by using ... Enter a name for your new namespace in the *Name* field of the *Create project* dialog. -... Click *Create*. \ No newline at end of file +... Click *Create*. diff --git a/modules/virt-golden-images.adoc b/modules/virt-golden-images.adoc index 85190daa126a..7dab4ada5918 100644 --- a/modules/virt-golden-images.adoc +++ b/modules/virt-golden-images.adoc @@ -6,6 +6,7 @@ [id="virt-about-golden-images_{context}"] = About golden images +[role="_abstract"] A golden image is a preconfigured snapshot of a virtual machine (VM) that you can use as a resource to deploy new VMs. For example, you can use golden images to provision the same system environment consistently and deploy systems more quickly and efficiently. [id="virt-how-golden-images-work_{context}"] diff --git a/modules/virt-granting-live-migration-permissions.adoc b/modules/virt-granting-live-migration-permissions.adoc index 29848d72e707..9d22501eb954 100644 --- a/modules/virt-granting-live-migration-permissions.adoc +++ b/modules/virt-granting-live-migration-permissions.adoc @@ -6,7 +6,8 @@ [id="virt-granting-live-migration-permissions_{context}"] = Granting live migration permissions -Grant trusted users or groups the ability to create, delete, and update live migration instances. +[role="_abstract"] +You can grant trusted users or groups the ability to create, delete, and update live migration instances. 
.Prerequisites diff --git a/modules/virt-hot-plugging-bridge-network-interface-cli.adoc b/modules/virt-hot-plugging-bridge-network-interface-cli.adoc index b2828881e8fb..934c5642e4eb 100644 --- a/modules/virt-hot-plugging-bridge-network-interface-cli.adoc +++ b/modules/virt-hot-plugging-bridge-network-interface-cli.adoc @@ -6,7 +6,8 @@ [id="virt-hot-plugging-bridge-network-interface_{context}"] = Hot plugging a secondary network interface by using the CLI -Hot plug a secondary network interface to a virtual machine (VM) while the VM is running. +[role="_abstract"] +You can hot plug a secondary network interface to a virtual machine (VM) while the VM is running. .Prerequisites @@ -18,7 +19,6 @@ Hot plug a secondary network interface to a virtual machine (VM) while the VM is . Use your preferred text editor to edit the `VirtualMachine` manifest, as shown in the following example: + -.Example VM configuration [source,yaml] ---- apiVersion: kubevirt.io/v1 @@ -70,7 +70,8 @@ where: $ oc get VirtualMachineInstanceMigration -w ---- + -.Example output +Example output: ++ [source,terminal] ---- NAME PHASE VMI @@ -89,7 +90,8 @@ kubevirt-migrate-vm-lj62q Succeeded vm-fedora $ oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }" ---- + -.Example output +Example output: ++ [source,json] ---- [ @@ -114,4 +116,4 @@ $ oc get vmi vm-fedora -ojsonpath="{ @.status.interfaces }" } ] ---- -<1> The hot plugged interface appears in the VMI status. \ No newline at end of file +<1> The hot plugged interface appears in the VMI status. diff --git a/modules/virt-hot-plugging-cpu.adoc b/modules/virt-hot-plugging-cpu.adoc index 9a010e093ccc..82c66637a1b3 100644 --- a/modules/virt-hot-plugging-cpu.adoc +++ b/modules/virt-hot-plugging-cpu.adoc @@ -7,6 +7,7 @@ = Hot plugging CPUs on a virtual machine +[role="_abstract"] You can increase or decrease the number of CPU sockets allocated to a virtual machine (VM) without having to restart the VM by using the {product-title} web console. 
.Procedure @@ -23,7 +24,7 @@ You can hot plug up to three times the default initial number of vCPU sockets of ==== + If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a `RestartRequired` condition is added to the VM. - ++ [NOTE] ==== If a VM has the `spec.template.spec.domain.devices.networkInterfaceMultiQueue` field enabled and CPUs are hot plugged, the following behavior occurs: @@ -31,4 +32,4 @@ If a VM has the `spec.template.spec.domain.devices.networkInterfaceMultiQueue` f * Existing network interfaces that you attach before the CPU hot plug retain their original queue count, even after you add more virtual CPUs (vCPUs). The underlying virtualization technology causes this expected behavior. * To update the queue count of existing interfaces to match the new vCPU configuration, you can restart the VM. A restart is only necessary if the update improves performance. * New VirtIO network interfaces that you hot plugged after the CPU hotplug automatically receive a queue count that matches the updated vCPU configuration. -==== \ No newline at end of file +==== diff --git a/modules/virt-hot-plugging-disk-cli.adoc b/modules/virt-hot-plugging-disk-cli.adoc index f96d031156a1..ec4c50854d00 100644 --- a/modules/virt-hot-plugging-disk-cli.adoc +++ b/modules/virt-hot-plugging-disk-cli.adoc @@ -6,6 +6,7 @@ [id="virt-hot-plugging-disk-cli_{context}"] = Hot plugging and hot unplugging a disk by using the CLI +[role="_abstract"] You can hot plug and hot unplug a disk while a virtual machine (VM) is running by using the command line. The hot plugged disk remains attached to the VM until you unplug it. 
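Reviewer note: for the CPU hot plug module above, it may be worth showing which `VirtualMachine` field the web console changes. KubeVirt exposes hot pluggable vCPUs through the socket count; the snippet below is an illustrative sketch with example values, not a complete manifest:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
spec:
  template:
    spec:
      domain:
        cpu:
          cores: 1
          threads: 1
          sockets: 2 # increasing this value hot plugs vCPUs; up to 3x the initial value without a restart
----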
diff --git a/modules/virt-hot-plugging-disks-ui.adoc b/modules/virt-hot-plugging-disks-ui.adoc index b52db74fe3d2..e6f1ea3843b0 100644 --- a/modules/virt-hot-plugging-disks-ui.adoc +++ b/modules/virt-hot-plugging-disks-ui.adoc @@ -6,6 +6,7 @@ [id="virt-hot-plugging-disks-ui_{context}"] = Hot plugging and hot unplugging a disk by using the web console +[role="_abstract"] You can hot plug a disk by attaching it to a virtual machine (VM) while the VM is running by using the {product-title} web console. The hot plugged disk remains attached to the VM until you unplug it. diff --git a/modules/virt-hot-plugging-memory.adoc b/modules/virt-hot-plugging-memory.adoc index 31c345a396aa..1d0700a08768 100644 --- a/modules/virt-hot-plugging-memory.adoc +++ b/modules/virt-hot-plugging-memory.adoc @@ -7,6 +7,7 @@ = Hot plugging memory on a virtual machine +[role="_abstract"] You can increase or decrease the amount of memory allocated to a virtual machine (VM) without having to restart the VM by using the {product-title} web console. .Procedure @@ -20,8 +21,9 @@ You can increase or decrease the amount of memory allocated to a virtual machine (VM) w ==== You can hot plug up to three times the default initial amount of memory of the VM. Exceeding this limit requires a restart. ==== ++ The system applies these changes immediately. If the VM is migratable, a live migration is triggered. If not, or if the changes cannot be live-updated, a `RestartRequired` condition is added to the VM. - ++ [NOTE] ==== Memory hot plugging for virtual machines requires guest operating system support for the `virtio-mem` driver. This support depends on the driver being included and enabled within the guest operating system, not on specific upstream kernel versions.
@@ -32,4 +34,4 @@ Supported guest operating systems: * RHEL 8.10 and later (hot-unplug is disabled by default) * Other Linux guests require kernel version 5.16 or later and the `virtio-mem` kernel module * Windows guests require `virtio-mem` driver version 100.95.104.26200 or later -==== \ No newline at end of file +==== diff --git a/modules/virt-hot-unplugging-bridge-network-interface-cli.adoc b/modules/virt-hot-unplugging-bridge-network-interface-cli.adoc index b0a5a0b6630a..8855283e5edb 100644 --- a/modules/virt-hot-unplugging-bridge-network-interface-cli.adoc +++ b/modules/virt-hot-unplugging-bridge-network-interface-cli.adoc @@ -6,6 +6,7 @@ [id="virt-hot-unplugging-bridge-network-interface_{context}"] = Hot unplugging a secondary network interface by using the CLI +[role="_abstract"] You can remove a secondary network interface from a running virtual machine (VM). [NOTE] @@ -24,7 +25,8 @@ Hot unplugging is not supported for Single Root I/O Virtualization (SR-IOV) inte . Using your preferred text editor, edit the `VirtualMachine` manifest file and set the interface state to `absent`. Setting the interface state to `absent` detaches the network interface from the guest, but the interface still exists in the pod. 
+ -.Example VM configuration +Example VM configuration: ++ [source,yaml] ---- apiVersion: kubevirt.io/v1 diff --git a/modules/virt-how-fs-overhead-affects-space-vm-disks.adoc b/modules/virt-how-fs-overhead-affects-space-vm-disks.adoc index fb951e4016fc..8b97aa1b3f1e 100644 --- a/modules/virt-how-fs-overhead-affects-space-vm-disks.adoc +++ b/modules/virt-how-fs-overhead-affects-space-vm-disks.adoc @@ -6,7 +6,10 @@ [id="virt-how-fs-overhead-affects-space-vm-disks_{context}"] = How file system overhead affects space for virtual machine disks -When you add a virtual machine disk to a persistent volume claim (PVC) that uses the `Filesystem` volume mode, you must ensure that there is enough space on the PVC for: +[role="_abstract"] +When you add a virtual machine disk to a persistent volume claim (PVC) that uses the `Filesystem` volume mode, you must ensure that there is enough space on the PVC. + +The space is required for: * The virtual machine disk. * The space reserved for file system overhead, such as metadata diff --git a/modules/virt-image-upload-commands.adoc b/modules/virt-image-upload-commands.adoc index 80464150d785..9597d1a9d6d8 100644 --- a/modules/virt-image-upload-commands.adoc +++ b/modules/virt-image-upload-commands.adoc @@ -5,6 +5,7 @@ [id="image-upload-commands_{context}"] = Image upload commands +[role="_abstract"] You can use the following `virtctl image-upload` commands to upload a VM image to a data volume. 
.Image upload commands diff --git a/modules/virt-importing-rhel-image-boot-source-web.adoc b/modules/virt-importing-rhel-image-boot-source-web.adoc index e1428795b844..5086ebbf8c60 100644 --- a/modules/virt-importing-rhel-image-boot-source-web.adoc +++ b/modules/virt-importing-rhel-image-boot-source-web.adoc @@ -6,6 +6,7 @@ [id="virt-importing-rhel-image-boot-source-web_{context}"] = Importing a {op-system-base} image as a boot source +[role="_abstract"] You can import a {op-system-base-full} image as a boot source by specifying a URL for the image. .Prerequisites diff --git a/modules/virt-infer-instancetype-preference.adoc b/modules/virt-infer-instancetype-preference.adoc index e9c091ab4c09..f41bc9a23896 100644 --- a/modules/virt-infer-instancetype-preference.adoc +++ b/modules/virt-infer-instancetype-preference.adoc @@ -6,6 +6,7 @@ [id="virt-infer-instancetype-preference_{context}"] = Inferring an instance type or preference +[role="_abstract"] Inferring instance types, preferences, or both is enabled by default, and the `inferFromVolumeFailure` policy of the `inferFromVolume` attribute is set to `Ignore`. When inferring from the boot volume, errors are ignored, and the VM is created with the instance type and preference left unset. However, when flags are applied, the `inferFromVolumeFailure` policy defaults to `Reject`. When inferring from the boot volume, errors result in the rejection of the creation of that VM. diff --git a/modules/virt-inferfromvolume-labels.adoc b/modules/virt-inferfromvolume-labels.adoc index 5004ade33a9d..21c88531c58f 100644 --- a/modules/virt-inferfromvolume-labels.adoc +++ b/modules/virt-inferfromvolume-labels.adoc @@ -6,6 +6,7 @@ [id="inferfromvolume-labels_{context}"] = Setting the inferFromVolume labels +[role="_abstract"] Use the following labels on your PVC, data source, or data volume to instruct the inference mechanism which instance type, preference, or both to use when trying to boot from a volume. 
* A cluster-wide instance type: `instancetype.kubevirt.io/default-instancetype` label. diff --git a/modules/virt-initiating-vm-migration-cli.adoc b/modules/virt-initiating-vm-migration-cli.adoc index 64d8f7ce0782..2d9d4bbdb89d 100644 --- a/modules/virt-initiating-vm-migration-cli.adoc +++ b/modules/virt-initiating-vm-migration-cli.adoc @@ -6,6 +6,7 @@ [id="virt-initiating-vm-migration-cli_{context}"] = Initiating live migration by using the CLI +[role="_abstract"] You can initiate the live migration of a running virtual machine (VM) by using the command line to create a `VirtualMachineInstanceMigration` object for the VM. .Prerequisites @@ -45,7 +46,8 @@ The `VirtualMachineInstanceMigration` object triggers a live migration of the VM $ oc describe vmi -n ---- + -.Example output +Example output: ++ [source,yaml] ---- # ... diff --git a/modules/virt-initiating-vm-migration-web.adoc b/modules/virt-initiating-vm-migration-web.adoc index e4a244087cb8..1f61011b067a 100644 --- a/modules/virt-initiating-vm-migration-web.adoc +++ b/modules/virt-initiating-vm-migration-web.adoc @@ -6,6 +6,7 @@ [id="virt-initiating-vm-migration-web_{context}"] = Initiating live migration by using the web console +[role="_abstract"] You can live migrate a running virtual machine (VM) to a different node in the cluster by using the {product-title} web console. [NOTE] diff --git a/modules/virt-installing-fusion-access-operator.adoc b/modules/virt-installing-fusion-access-operator.adoc index e0bc653e27f8..eeb5027f0094 100644 --- a/modules/virt-installing-fusion-access-operator.adoc +++ b/modules/virt-installing-fusion-access-operator.adoc @@ -6,7 +6,8 @@ [id="installing-fusion-access-operator_{context}"] = Installing the {FusionSAN} Operator -Install the {FusionSAN} Operator from the software catalog in the {product-title} web console. +[role="_abstract"] +You can install the {FusionSAN} Operator from the software catalog in the {product-title} web console. 
.Prerequisites diff --git a/modules/virt-installing-qemu-guest-agent-on-linux-vm.adoc b/modules/virt-installing-qemu-guest-agent-on-linux-vm.adoc index 0ba16540ca73..12d0c8c34fbf 100644 --- a/modules/virt-installing-qemu-guest-agent-on-linux-vm.adoc +++ b/modules/virt-installing-qemu-guest-agent-on-linux-vm.adoc @@ -7,9 +7,8 @@ [id="virt-installing-qemu-guest-agent-on-linux-vm_{context}"] = Installing the QEMU guest agent on a Linux VM -The `qemu-guest-agent` is available by default in {op-system-base-full} virtual machines (VMs) - -To create snapshots of a VM in the `Running` state with the highest integrity, install the QEMU guest agent. +[role="_abstract"] +The `qemu-guest-agent` is available by default in {op-system-base-full} virtual machines (VMs). To create snapshots of a VM in the `Running` state with the highest integrity, install the QEMU guest agent. The QEMU guest agent takes a consistent snapshot by attempting to quiesce the VM file system. This ensures that in-flight I/O is written to the disk before the snapshot is taken. If the guest agent is not present, quiescing is not possible and a best-effort snapshot is taken. diff --git a/modules/virt-installing-qemu-guest-agent-on-windows-vm.adoc b/modules/virt-installing-qemu-guest-agent-on-windows-vm.adoc index fbb2f6fe1fef..8a2ecbb2368f 100644 --- a/modules/virt-installing-qemu-guest-agent-on-windows-vm.adoc +++ b/modules/virt-installing-qemu-guest-agent-on-windows-vm.adoc @@ -7,6 +7,7 @@ [id="installing-qemu-guest-agent-on-windows-vm_{context}"] = Installing the QEMU guest agent on a Windows VM +[role="_abstract"] For Windows virtual machines (VMs), the QEMU guest agent is included in the VirtIO drivers. You can install the drivers during a Windows installation or on an existing Windows VM. To create snapshots of a VM in the `Running` state with the highest integrity, install the QEMU guest agent. 
@@ -30,4 +31,4 @@ The conditions under which a snapshot is taken are reflected in the snapshot ind $ net start ---- -. Verify that the output contains the `QEMU Guest Agent`. \ No newline at end of file +. Verify that the output contains the `QEMU Guest Agent`. diff --git a/modules/virt-installing-virt-operator.adoc b/modules/virt-installing-virt-operator.adoc index 749277479669..00b0623d0cba 100644 --- a/modules/virt-installing-virt-operator.adoc +++ b/modules/virt-installing-virt-operator.adoc @@ -6,6 +6,7 @@ [id="virt-installing-virt-operator_{context}"] = Installing the {VirtProductName} Operator by using the web console +[role="_abstract"] You can deploy the {VirtProductName} Operator by using the {product-title} web console. .Prerequisites diff --git a/modules/virt-installing-virtctl-client-yum.adoc b/modules/virt-installing-virtctl-client-yum.adoc index 882ffc803a05..5fea0c8f6538 100644 --- a/modules/virt-installing-virtctl-client-yum.adoc +++ b/modules/virt-installing-virtctl-client-yum.adoc @@ -6,6 +6,7 @@ [id="virt-installing-virtctl-client-yum_{context}"] = Installing the virtctl RPM on {op-system-base} 8 +[role="_abstract"] You can install the `virtctl` RPM package on {op-system-base-full} 8 by enabling the {VirtProductName} repository and installing the `kubevirt-virtctl` package. .Prerequisites diff --git a/modules/virt-installing-virtctl-client.adoc b/modules/virt-installing-virtctl-client.adoc index 1621f02392eb..b73e818b95a0 100644 --- a/modules/virt-installing-virtctl-client.adoc +++ b/modules/virt-installing-virtctl-client.adoc @@ -6,6 +6,7 @@ [id="virt-installing-virtctl-client_{context}"] = Installing the virtctl binary on {op-system-base} 9, Linux, Windows, or macOS +[role="_abstract"] You can download the `virtctl` binary for your operating system from the {product-title} web console and then install it. 
.Procedure diff --git a/modules/virt-installing-virtio-drivers-existing-windows.adoc b/modules/virt-installing-virtio-drivers-existing-windows.adoc index 86abb4f259cd..b5462cf9cc33 100644 --- a/modules/virt-installing-virtio-drivers-existing-windows.adoc +++ b/modules/virt-installing-virtio-drivers-existing-windows.adoc @@ -7,6 +7,7 @@ [id="virt-installing-virtio-drivers-existing-windows_{context}"] = Installing VirtIO drivers from a SATA CD drive on an existing Windows VM +[role="_abstract"] You can install the VirtIO drivers from a SATA CD drive on an existing Windows virtual machine (VM). [NOTE] diff --git a/modules/virt-installing-virtio-drivers-installing-windows.adoc b/modules/virt-installing-virtio-drivers-installing-windows.adoc index 06316744a514..ab7e188a3fce 100644 --- a/modules/virt-installing-virtio-drivers-installing-windows.adoc +++ b/modules/virt-installing-virtio-drivers-installing-windows.adoc @@ -7,6 +7,7 @@ [id="virt-installing-virtio-drivers-installing-windows_{context}"] = Installing VirtIO drivers during Windows installation +[role="_abstract"] You can install the VirtIO drivers while installing Windows on a virtual machine (VM). [NOTE] diff --git a/modules/virt-installing-watchdog-agent.adoc b/modules/virt-installing-watchdog-agent.adoc index b8fcdb5c9257..facddca3eb31 100644 --- a/modules/virt-installing-watchdog-agent.adoc +++ b/modules/virt-installing-watchdog-agent.adoc @@ -6,7 +6,8 @@ [id="virt-installing-watchdog-agent_{context}"] = Installing the watchdog agent on the guest -You install the watchdog agent on the guest and start the `watchdog` service. +[role="_abstract"] +You can install the watchdog agent on the guest and start the `watchdog` service. 
.Procedure diff --git a/modules/virt-jumbo-frames-vm-pod-nw.adoc b/modules/virt-jumbo-frames-vm-pod-nw.adoc index 54f61a8ccba3..9cb89986a826 100644 --- a/modules/virt-jumbo-frames-vm-pod-nw.adoc +++ b/modules/virt-jumbo-frames-vm-pod-nw.adoc @@ -6,6 +6,7 @@ [id="virt-jumbo-frames-vm-pod-nw_{context}"] = About jumbo frames support +[role="_abstract"] When using the OVN-Kubernetes CNI plugin, you can send unfragmented jumbo frame packets between two virtual machines (VMs) that are connected on the default pod network. Jumbo frames have a maximum transmission unit (MTU) value greater than 1500 bytes. The VM automatically gets the MTU value of the cluster network, set by the cluster administrator, in one of the following ways: @@ -17,4 +18,4 @@ The VM automatically gets the MTU value of the cluster network, set by the clust [NOTE] ==== For Windows VMs that do not have a VirtIO driver, you must set the MTU manually by using `netsh` or a similar tool. This is because the Windows DHCP client does not read the MTU value. -==== \ No newline at end of file +==== diff --git a/modules/virt-latency-checkup-web-console.adoc b/modules/virt-latency-checkup-web-console.adoc index 76c389e25f34..c42f5c2dcc7e 100644 --- a/modules/virt-latency-checkup-web-console.adoc +++ b/modules/virt-latency-checkup-web-console.adoc @@ -6,6 +6,7 @@ [id="virt-latency-checkup-web-console_{context}"] = Running a latency checkup by using the web console +[role="_abstract"] Run a latency checkup to verify network connectivity and measure the latency between two virtual machines attached to a secondary network interface. 
.Prerequisites diff --git a/modules/virt-launching-ibm-secure-execution-vm-ibm-z.adoc b/modules/virt-launching-ibm-secure-execution-vm-ibm-z.adoc index 1117427276b7..7a3bd0734b3b 100644 --- a/modules/virt-launching-ibm-secure-execution-vm-ibm-z.adoc +++ b/modules/virt-launching-ibm-secure-execution-vm-ibm-z.adoc @@ -6,6 +6,7 @@ [id="virt-launching-ibm-secure-execution-vm-ibm-z_{context}"] = Launching an {ibm-title} Secure Execution VM on {ibm-z-title} and {ibm-linuxone-title} +[role="_abstract"] Before launching an {ibm-name} Secure Execution VM on {ibm-z-name} and {ibm-linuxone-name}, you must add the `launchSecurity` parameter to the VM manifest. Otherwise, the VM does not boot correctly because it does not have access to the devices. .Procedure diff --git a/modules/virt-linux-bridge-nad-port-isolation.adoc b/modules/virt-linux-bridge-nad-port-isolation.adoc index e006aa39774d..98bced2dd14e 100644 --- a/modules/virt-linux-bridge-nad-port-isolation.adoc +++ b/modules/virt-linux-bridge-nad-port-isolation.adoc @@ -6,7 +6,10 @@ [id="virt-linux-bridge-nad-port-isolation_{context}"] = Enabling port isolation for a Linux bridge NAD -You can enable port isolation for a Linux bridge network attachment definition (NAD) so that virtual machines (VMs) or pods that run on the same virtual LAN (VLAN) can operate in isolation from one another. The Linux bridge NAD creates a virtual bridge, or _virtual switch_, between network interfaces and the physical network. +[role="_abstract"] +You can enable port isolation for a Linux bridge network attachment definition (NAD) so that virtual machines (VMs) or pods that run on the same virtual LAN (VLAN) can operate in isolation from one another. + +The Linux bridge NAD creates a virtual bridge, or _virtual switch_, between network interfaces and the physical network. Isolating ports in this way can provide enhanced security for VM workloads that run on the same node. 
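Reviewer note: the port isolation module above could include a sketch of the NAD it describes. The following is illustrative only: the NAD name and bridge name are placeholders, and the `portIsolation` key is an assumption based on the bridge CNI plugin's option of that name, so verify it against the plugin version in use:

[source,yaml]
----
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: bridge-isolated # hypothetical name
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "bridge-isolated",
      "type": "cnv-bridge",
      "bridge": "br1",
      "portIsolation": true
    }
----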
diff --git a/modules/virt-listing-vmis-cli.adoc b/modules/virt-listing-vmis-cli.adoc index 9919abdb50bd..dcbdb757468c 100644 --- a/modules/virt-listing-vmis-cli.adoc +++ b/modules/virt-listing-vmis-cli.adoc @@ -8,6 +8,7 @@ [id="virt-listing-vmis-cli_{context}"] = Listing all virtual machine instances using the CLI +[role="_abstract"] You can list all virtual machine instances (VMIs) in your cluster, including standalone VMIs and those owned by virtual machines, by using the `oc` command-line interface (CLI). .Prerequisites diff --git a/modules/virt-listing-vmis-web.adoc b/modules/virt-listing-vmis-web.adoc index cd56929be1bf..535bbb29586a 100644 --- a/modules/virt-listing-vmis-web.adoc +++ b/modules/virt-listing-vmis-web.adoc @@ -7,6 +7,7 @@ [id="virt-listing-vmis-web_{context}"] = Listing standalone virtual machine instances using the web console +[role="_abstract"] Using the web console, you can list and view standalone virtual machine instances (VMIs) in your cluster that are not owned by virtual machines (VMs). [NOTE] diff --git a/modules/virt-listing-vms-cli.adoc b/modules/virt-listing-vms-cli.adoc index 79b8bb5452dc..59bedd9b4871 100644 --- a/modules/virt-listing-vms-cli.adoc +++ b/modules/virt-listing-vms-cli.adoc @@ -7,6 +7,7 @@ [id="virt-listing-vms-cli_{context}"] = Listing virtual machines by using the CLI +[role="_abstract"] You can either list all of the virtual machines (VMs) in your cluster or limit the list to VMs in a specified namespace by using the {oc-first}. .Prerequisites diff --git a/modules/virt-listing-vms-web.adoc b/modules/virt-listing-vms-web.adoc index f5cb1eceaa40..be9d0627ff1e 100644 --- a/modules/virt-listing-vms-web.adoc +++ b/modules/virt-listing-vms-web.adoc @@ -7,6 +7,7 @@ [id="virt-listing-vms-web_{context}"] = Listing virtual machines by using the web console +[role="_abstract"] You can list all of the virtual machines (VMs) in your cluster by using the web console. 
.Procedure diff --git a/modules/virt-live-migration-metrics.adoc b/modules/virt-live-migration-metrics.adoc index d877662b5fa5..be0cb2a338e6 100644 --- a/modules/virt-live-migration-metrics.adoc +++ b/modules/virt-live-migration-metrics.adoc @@ -6,7 +6,8 @@ [id="virt-live-migration-metrics_{context}"] = Live migration metrics -The following metrics can be queried to show live migration status: +[role="_abstract"] +The following metrics can be queried to show live migration status. `kubevirt_vmi_migration_data_processed_bytes`:: The amount of guest operating system data that has migrated to the new virtual machine (VM). Type: Gauge. diff --git a/modules/virt-loki-log-queries.adoc b/modules/virt-loki-log-queries.adoc index e18b78e48c9e..ffb94d447d72 100644 --- a/modules/virt-loki-log-queries.adoc +++ b/modules/virt-loki-log-queries.adoc @@ -2,9 +2,11 @@ // // * virt/support/virt-troubleshooting.adoc +:_mod-docs-content-type: REFERENCE [id="virt-loki-log-queries_{context}"] = {VirtProductName} LogQL queries +[role="_abstract"] You can view and filter aggregated logs for {VirtProductName} components by running Loki Query Language (LogQL) queries on the *Observe* -> *Logs* page in the web console. The default log type is _infrastructure_. The `virt-launcher` log type is _application_. 
@@ -110,9 +112,11 @@ You can filter log lines to include or exclude strings or regular expressions by |==== .Example line filter expression +==== [source,text] ---- {log_type=~".+"}|json |kubernetes_labels_app_kubernetes_io_part_of="hyperconverged-cluster" |= "error" != "timeout" ----- \ No newline at end of file +---- +==== diff --git a/modules/virt-maintaining-bare-metal-nodes.adoc b/modules/virt-maintaining-bare-metal-nodes.adoc index ec034bd72c1c..612e319ab23d 100644 --- a/modules/virt-maintaining-bare-metal-nodes.adoc +++ b/modules/virt-maintaining-bare-metal-nodes.adoc @@ -6,7 +6,10 @@ [id="virt-maintaining-bare-metal-nodes_{context}"] = Maintaining bare metal nodes -When you deploy {product-title} on bare metal infrastructure, there are additional considerations that must be taken into account compared to deploying on cloud infrastructure. Unlike in cloud environments where the cluster nodes are considered ephemeral, re-provisioning a bare metal node requires significantly more time and effort for maintenance tasks. +[role="_abstract"] +When you deploy {product-title} on bare metal infrastructure, there are additional considerations that must be taken into account compared to deploying on cloud infrastructure. + +Unlike in cloud environments where the cluster nodes are considered ephemeral, re-provisioning a bare metal node requires significantly more time and effort for maintenance tasks. When a bare metal node fails, for example, if a fatal kernel error happens or a NIC hardware failure occurs, workloads on the failed node need to be restarted elsewhere on the cluster while the problem node is repaired or replaced. Node maintenance mode allows cluster administrators to gracefully power down nodes, moving workloads to other parts of the cluster and ensuring workloads do not get interrupted. Detailed progress and node status are provided during maintenance.
diff --git a/modules/virt-managing-auto-update-all-system-boot-sources.adoc b/modules/virt-managing-auto-update-all-system-boot-sources.adoc index 0c1f83bdce7d..b4e3dfe529bc 100644 --- a/modules/virt-managing-auto-update-all-system-boot-sources.adoc +++ b/modules/virt-managing-auto-update-all-system-boot-sources.adoc @@ -7,6 +7,7 @@ [id="virt-managing-auto-update-all-system-boot-sources_{context}"] = Managing automatic updates for all system-defined boot sources +[role="_abstract"] Disabling automatic boot source imports and updates can lower resource usage. In disconnected environments, disabling automatic boot source updates prevents `CDIDataImportCronOutdated` alerts from filling up logs. To disable automatic updates for all system-defined boot sources, set the `enableCommonBootImageImport` field value to `false`. Setting this value to `true` turns automatic updates back on. diff --git a/modules/virt-managing-kubemacpool-cli.adoc b/modules/virt-managing-kubemacpool-cli.adoc index e1c553449e61..aebeaf6494d6 100644 --- a/modules/virt-managing-kubemacpool-cli.adoc +++ b/modules/virt-managing-kubemacpool-cli.adoc @@ -6,6 +6,7 @@ [id="virt-managing-kubemacpool-cli_{context}"] = Managing KubeMacPool by using the CLI +[role="_abstract"] You can disable and re-enable KubeMacPool by using the command line. KubeMacPool is enabled by default. diff --git a/modules/virt-manual-approval-strategy.adoc b/modules/virt-manual-approval-strategy.adoc index c871e55e6b05..4db7bddf8be6 100644 --- a/modules/virt-manual-approval-strategy.adoc +++ b/modules/virt-manual-approval-strategy.adoc @@ -6,6 +6,7 @@ [id="virt-manual-approval-strategy_{context}"] = Manual approval strategy -If you use the *Manual* approval strategy, you must manually approve every pending update. If {product-title} and {VirtProductName} updates are out of sync, your cluster becomes unsupported. To avoid risking the supportability and functionality of your cluster, use the *Automatic* approval strategy. 
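The `enableCommonBootImageImport` toggle described in the boot source module above can be set with a single patch. This is a sketch only: `openshift-cnv` is the default installation namespace, and the field path under `spec` has moved between releases, so confirm it against your installed CRD first:

```shell
# Sketch: disable automatic boot source imports and updates.
# The featureGates path is an assumption; verify with: oc explain hyperconverged.spec
oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv \
  --type merge -p '{"spec": {"featureGates": {"enableCommonBootImageImport": false}}}'
```

Patching the same path with `true` turns automatic updates back on.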
+[role="_abstract"] +If you use the *Manual* approval strategy, you must manually approve every pending update. If {product-title} and {VirtProductName} updates are out of sync, your cluster becomes unsupported. -If you must use the *Manual* approval strategy, maintain a supportable cluster by approving pending Operator updates as soon as they become available. \ No newline at end of file +To avoid risking the supportability and functionality of your cluster, use the *Automatic* approval strategy. If you must use the *Manual* approval strategy, maintain a supportable cluster by approving pending Operator updates as soon as they become available. diff --git a/modules/virt-measuring-latency-vm-secondary-network.adoc b/modules/virt-measuring-latency-vm-secondary-network.adoc index 06fb7b683037..a05ee31ee4cc 100644 --- a/modules/virt-measuring-latency-vm-secondary-network.adoc +++ b/modules/virt-measuring-latency-vm-secondary-network.adoc @@ -6,7 +6,10 @@ [id="virt-measuring-latency-vm-secondary-network_{context}"] = Running a latency checkup by using the CLI -You run a latency checkup using the CLI by performing the following steps: +[role="_abstract"] +You can run a latency checkup by using the CLI. + +Perform the following steps: . Create a service account, roles, and rolebindings to provide cluster access permissions to the latency checkup. . Create a config map to provide the input to run the checkup and to store the results. @@ -23,11 +26,11 @@ You run a latency checkup using the CLI by performing the following steps: .Procedure -. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest for the latency checkup: +. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest for the latency checkup. ++ +Example role manifest file: + -.Example role manifest file -[%collapsible] -==== [source,yaml] ---- --- @@ -84,7 +87,6 @@ roleRef: name: kiagnose-configmap-access apiGroup: rbac.authorization.k8s.io ---- -==== .
Apply the `ServiceAccount`, `Role`, and `RoleBinding` manifest: + @@ -94,9 +96,10 @@ $ oc apply -n -f .yaml <1> ---- <1> `` is the namespace where the checkup is to be run. This must be an existing namespace where the `NetworkAttachmentDefinition` object resides. -. Create a `ConfigMap` manifest that contains the input parameters for the checkup: +. Create a `ConfigMap` manifest that contains the input parameters for the checkup. ++ +Example input config map: + -.Example input config map [source,yaml] ---- apiVersion: v1 @@ -127,9 +130,10 @@ data: $ oc apply -n -f .yaml ---- -. Create a `Job` manifest to run the checkup: +. Create a `Job` manifest to run the checkup. ++ +Example job manifest: + -.Example job manifest [source,yaml,subs="attributes+"] ---- apiVersion: batch/v1 @@ -186,7 +190,8 @@ $ oc wait job kubevirt-vm-latency-checkup -n --for condition= $ oc get configmap kubevirt-vm-latency-checkup-config -n -o yaml ---- + -.Example output config map (success) +Example output config map (success): ++ [source,yaml] ---- apiVersion: v1 diff --git a/modules/virt-metro-dr-odf.adoc b/modules/virt-metro-dr-odf.adoc index 1276c6dce20f..97419f115342 100644 --- a/modules/virt-metro-dr-odf.adoc +++ b/modules/virt-metro-dr-odf.adoc @@ -6,9 +6,11 @@ [id="metro-dr-odf_{context}"] = Metro-DR for {rh-storage-first} +[role="_abstract"] {VirtProductName} supports the link:https://access.redhat.com/documentation/en-us/red_hat_openshift_data_foundation/latest/html-single/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/index#metro-dr-solution[Metro-DR solution for {rh-storage}], which provides two-way synchronous data replication between managed {VirtProductName} clusters installed on primary and secondary sites. -.Metro-DR differences +== Metro-DR differences + * This synchronous solution is only available to metropolitan distance data centers with a network round-trip latency of 10 milliseconds or less. * Multiple disk VMs are supported. 
* To prevent data corruption, you must ensure that storage is fenced during failover. @@ -18,4 +20,4 @@ Fencing means isolating a node so that workloads do not run on it. ==== -For more information about using the Metro-DR solution for {rh-storage} with {VirtProductName}, see {ibm-title}'s {rh-storage} Metro-DR documentation. \ No newline at end of file +For more information about using the Metro-DR solution for {rh-storage} with {VirtProductName}, see {ibm-title}'s {rh-storage} Metro-DR documentation. diff --git a/modules/virt-migrate-vm-to-labeled-node.adoc b/modules/virt-migrate-vm-to-labeled-node.adoc index 5c0fddf1a7a5..670ac7985eee 100644 --- a/modules/virt-migrate-vm-to-labeled-node.adoc +++ b/modules/virt-migrate-vm-to-labeled-node.adoc @@ -6,7 +6,10 @@ [id="virt-migrate-vm-to-labeled-node_{context}"] = Migrating a VM to a specific node -You can migrate a running virtual machine (VM) to a specific subset of nodes by using the `addedNodeSelector` field on the `VirtualMachineInstanceMigration` object. This field lets you apply additional node selection rules for a *one-time* migration attempt, without affecting the VM configuration or future migrations. +[role="_abstract"] +You can migrate a running virtual machine (VM) to a specific subset of nodes by using the `addedNodeSelector` field on the `VirtualMachineInstanceMigration` object. + +The `addedNodeSelector` field lets you apply additional node selection rules for a *one-time* migration attempt, without affecting the VM configuration or future migrations. .Prerequisites @@ -45,4 +48,4 @@ where: $ oc apply -f .yaml ---- + -If no nodes satisfy the constraints, the migration is declared a failure after a timeout. The VM remains unaffected. \ No newline at end of file +If no nodes satisfy the constraints, the migration is declared a failure after a timeout. The VM remains unaffected. 
diff --git a/modules/virt-migrating-bulk-vms-different-storage-class-web.adoc b/modules/virt-migrating-bulk-vms-different-storage-class-web.adoc index d5307c503848..8745519628a2 100644 --- a/modules/virt-migrating-bulk-vms-different-storage-class-web.adoc +++ b/modules/virt-migrating-bulk-vms-different-storage-class-web.adoc @@ -6,6 +6,7 @@ [id="virt-migrating-bulk-vms-different-storage-class-web_{context}"] = Migrating VMs in a single cluster to a different storage class by using the web console +[role="_abstract"] By using the {product-title} web console, you can migrate single-cluster VMs in bulk from one storage class to another storage class. .Prerequisites @@ -39,4 +40,4 @@ You can also click *VirtualMachine name* to select all VMs. . Review the details, and click *Migrate VirtualMachine storage* to start the migration. -. Optional: Click *Stop* to interrupt the migration, or click *View storage migrations* to see the status of current and previous migrations. \ No newline at end of file +. Optional: Click *Stop* to interrupt the migration, or click *View storage migrations* to see the status of current and previous migrations. diff --git a/modules/virt-migrating-storage-class-ui.adoc b/modules/virt-migrating-storage-class-ui.adoc index a17031e7575c..c58480f4a4ba 100644 --- a/modules/virt-migrating-storage-class-ui.adoc +++ b/modules/virt-migrating-storage-class-ui.adoc @@ -7,6 +7,7 @@ [id="virt-migrating-storage-class-ui_{context}"] = Migrating VM disks to a different storage class by using the web console +[role="_abstract"] You can migrate one or more disks attached to a virtual machine (VM) to a different storage class by using the {product-title} web console. When performing this action on a running VM, the operation of the VM is not interrupted and the data on the migrated disks remains accessible. 
[NOTE] diff --git a/modules/virt-mod-golden-image-heterogeneous-clusters.adoc b/modules/virt-mod-golden-image-heterogeneous-clusters.adoc index 8078c0193c7c..d0d808bc90ac 100644 --- a/modules/virt-mod-golden-image-heterogeneous-clusters.adoc +++ b/modules/virt-mod-golden-image-heterogeneous-clusters.adoc @@ -6,6 +6,7 @@ [id="virt-mod-golden-image-heterogeneous-clusters_{context}"] = Modifying a common golden image source in a heterogeneous cluster +[role="_abstract"] You can modify the image source of a common golden image in a heterogeneous cluster by specifying the supported architectures in the `ssp.kubevirt.io/dict.architectures` annotation in the `HyperConverged` custom resource (CR). :FeatureName: Golden image support for heterogeneous clusters @@ -47,4 +48,4 @@ spec: ---- <1> The comma-separated list of supported architectures for this image. For example, if the image supports `amd64` and `arm64` architectures, the value would be `"amd64,arm64"`. -. Save and exit the editor to update the `HyperConverged` CR. \ No newline at end of file +. Save and exit the editor to update the `HyperConverged` CR. diff --git a/modules/virt-modify-workload-node-heterogeneous-cluster.adoc b/modules/virt-modify-workload-node-heterogeneous-cluster.adoc index e48e46b5d5f2..d537711cb3bb 100644 --- a/modules/virt-modify-workload-node-heterogeneous-cluster.adoc +++ b/modules/virt-modify-workload-node-heterogeneous-cluster.adoc @@ -10,6 +10,7 @@ :FeatureName: Golden image support for heterogeneous clusters include::snippets/technology-preview.adoc[] +[role="_abstract"] If you have a heterogeneous cluster but do not want to enable multiple architecture support, you can modify the workloads node placement in the `HyperConverged` custom resource (CR) to only include nodes with a specific architecture.
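The workload node placement change described above can be sketched as a patch that selects nodes by the standard `kubernetes.io/arch` label. The namespace and object name assume a default installation, and `amd64` is an example value:

```shell
# Sketch: restrict HyperConverged workloads to nodes of one architecture.
oc patch hyperconverged kubevirt-hyperconverged -n openshift-cnv --type merge \
  -p '{"spec": {"workloads": {"nodePlacement": {"nodeSelector": {"kubernetes.io/arch": "amd64"}}}}}'
```

Because this is a merge patch, any existing `nodePlacement` selectors on the object are combined with the new key rather than reviewed, so check the resulting spec after patching.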
.Prerequisites diff --git a/modules/virt-monitor-node-network-config-console.adoc b/modules/virt-monitor-node-network-config-console.adoc index 98c3f89077d3..0e50b0f1e1e5 100644 --- a/modules/virt-monitor-node-network-config-console.adoc +++ b/modules/virt-monitor-node-network-config-console.adoc @@ -2,6 +2,7 @@ [id="virt-monitor-node-network-config-console_{context}"] = Monitoring the policy status +[role="_abstract"] You can monitor the policy status from the *NodeNetworkConfigurationPolicy* page. This page displays all the policies created in the cluster in a tabular format, with the following columns: Name:: The name of the policy created. @@ -10,4 +11,4 @@ Matched nodes:: The count of nodes where the policies are applied. This could be Node network state:: The enactment state of the matched nodes. You can click on the enactment state and view detailed information on the status. -To find the desired policy, you can filter the list either based on enactment state by using the *Filter* option, or by using the search option. \ No newline at end of file +To find the desired policy, you can filter the list either based on enactment state by using the *Filter* option, or by using the search option. diff --git a/modules/virt-monitoring-upgrade-status.adoc b/modules/virt-monitoring-upgrade-status.adoc index fa9ed0eafb1e..10faa5a4820f 100644 --- a/modules/virt-monitoring-upgrade-status.adoc +++ b/modules/virt-monitoring-upgrade-status.adoc @@ -6,6 +6,7 @@ [id="virt-monitoring-upgrade-status_{context}"] = Monitoring update status +[role="_abstract"] To monitor the status of a {VirtProductName} Operator update, watch the cluster service version (CSV) `PHASE`. You can also monitor the CSV conditions in the web console or by running the command provided here. [NOTE] @@ -30,7 +31,6 @@ $ oc get csv -n {CNVNamespace} . Review the output, checking the `PHASE` field. 
For example: + -.Example output [source,terminal,subs="attributes+"] ---- VERSION REPLACES PHASE @@ -49,7 +49,6 @@ $ oc get hyperconverged kubevirt-hyperconverged -n {CNVNamespace} \ + A successful upgrade results in the following output: + -.Example output [source,terminal] ---- ReconcileComplete True Reconcile completed successfully diff --git a/modules/virt-mounting-windows-driver-disk-on-vm.adoc b/modules/virt-mounting-windows-driver-disk-on-vm.adoc index 5288d2f8d2c9..3d436fcdfc21 100644 --- a/modules/virt-mounting-windows-driver-disk-on-vm.adoc +++ b/modules/virt-mounting-windows-driver-disk-on-vm.adoc @@ -7,6 +7,7 @@ = Mounting a Windows driver disk on a virtual machine +[role="_abstract"] You can mount a Windows driver disk on a virtual machine (VM) by using the {product-title} web console. .Procedure diff --git a/modules/virt-must-gather-options.adoc b/modules/virt-must-gather-options.adoc index 04186e3a3670..642a43c1985e 100644 --- a/modules/virt-must-gather-options.adoc +++ b/modules/virt-must-gather-options.adoc @@ -6,7 +6,10 @@ [id="virt-must-gather-options_{context}"] = must-gather tool options -You can run the `oc adm must-gather` command to collect `must gather` images for all the Operators and products deployed on your cluster without the need to explicitly specify the required images. Alternatively, you can specify a combination of scripts and environment variables for the following options: +[role="_abstract"] +You can run the `oc adm must-gather` command to collect `must-gather` images for all the Operators and products deployed on your cluster without the need to explicitly specify the required images.
+ +Alternatively, you can specify a combination of scripts and environment variables for the following options: * Collecting detailed virtual machine (VM) information from a namespace * Collecting detailed information about specified VMs @@ -16,34 +19,32 @@ You can run the `oc adm must-gather` command to collect `must gather` images for [id="parameters"] == Parameters -.Environment variables - +Environment variables:: ++ You can specify environment variables for a compatible script. -`NS=`:: Collect virtual machine information, including `virt-launcher` pod details, from the namespace that you specify. The `VirtualMachine` and `VirtualMachineInstance` CR data is collected for all namespaces. - -`VM=`:: Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the `NS` environment variable. +`NS=`::: Collect virtual machine information, including `virt-launcher` pod details, from the namespace that you specify. The `VirtualMachine` and `VirtualMachineInstance` CR data is collected for all namespaces. -`PROS=`:: Modify the maximum number of parallel processes that the `must-gather` tool uses. The default value is `5`. +`VM=`::: Collect details about a particular virtual machine. To use this option, you must also specify a namespace by using the `NS` environment variable. +`PROS=`::: Modify the maximum number of parallel processes that the `must-gather` tool uses. The default value is `5`. + [IMPORTANT] ==== Using too many parallel processes can cause performance issues. Increasing the maximum number of parallel processes is not recommended. ==== - -.Scripts - +Scripts:: ++ Each script is compatible only with certain environment variable combinations. -`/usr/bin/gather`:: Use the default `must-gather` script, which collects cluster data from all namespaces and includes only basic VM information. This script is compatible only with the `PROS` variable. 
+`/usr/bin/gather`::: Use the default `must-gather` script, which collects cluster data from all namespaces and includes only basic VM information. This script is compatible only with the `PROS` variable. -`/usr/bin/gather --vms_details`:: Collect VM log files, VM definitions, control-plane logs, and namespaces that belong to {VirtProductName} resources. Specifying namespaces includes their child objects. If you use this parameter without specifying a namespace or VM, the `must-gather` tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the `VM` variable. +`/usr/bin/gather --vms_details`::: Collect VM log files, VM definitions, control-plane logs, and namespaces that belong to {VirtProductName} resources. Specifying namespaces includes their child objects. If you use this parameter without specifying a namespace or VM, the `must-gather` tool collects this data for all VMs in the cluster. This script is compatible with all environment variables, but you must specify a namespace if you use the `VM` variable. -`/usr/bin/gather --images`:: Collect image, image-stream, and image-stream-tags custom resource information. This script is compatible only with the `PROS` variable. +`/usr/bin/gather --images`::: Collect image, image-stream, and image-stream-tags custom resource information. This script is compatible only with the `PROS` variable. -`/usr/bin/gather --instancetypes`:: Collect instance types information. This information is not currently collected by default; you can, however, optionally collect it. +`/usr/bin/gather --instancetypes`::: Collect instance types information. This information is not currently collected by default; you can, however, optionally collect it. [id="usage-and-examples_{context}"] == Usage and examples @@ -70,17 +71,17 @@ Environment variables are optional. 
You can run a script by itself or with one o -.Syntax - +Syntax:: ++ To collect `must-gather` logs for all Operators and products on your cluster in a single pass, run the following command: - ++ [source,terminal,subs="attributes+"] ---- $ oc adm must-gather --all-images ---- - ++ If you need to pass additional parameters to individual `must-gather` images, use the following command: - ++ [source,terminal,subs="attributes+"] ---- $ oc adm must-gather \ @@ -88,10 +89,10 @@ $ oc adm must-gather \ -- ---- -.Default data collection parallel processes - +Default data collection parallel processes:: ++ By default, five processes run in parallel. - ++ [source,terminal,subs="attributes+"] ---- $ oc adm must-gather \ @@ -101,10 +102,10 @@ $ oc adm must-gather \ <1> You can modify the number of parallel processes by changing the default. -.Detailed VM information - +Detailed VM information:: ++ The following command collects detailed VM information for the `my-vm` VM in the `mynamespace` namespace: - ++ [source,terminal,subs="attributes+"] ---- $ oc adm must-gather \ @@ -114,10 +115,10 @@ $ oc adm must-gather \ <1> The `NS` environment variable is mandatory if you use the `VM` environment variable. 
-.Image, image-stream, and image-stream-tags information - +Image, image-stream, and image-stream-tags information:: ++ The following command collects image, image-stream, and image-stream-tags information from the cluster: - ++ [source,terminal,subs="attributes+"] ---- $ oc adm must-gather \ @@ -125,13 +126,13 @@ $ oc adm must-gather \ /usr/bin/gather --images ---- -.Instance types information - +Instance types information:: ++ The following command collects instance types information from the cluster: - ++ [source,terminal,subs="attributes+"] ---- $ oc adm must-gather \ --image=registry.redhat.io/container-native-virtualization/cnv-must-gather-rhel9:v{HCOVersion} \ /usr/bin/gather --instancetypes ----- \ No newline at end of file +---- diff --git a/modules/virt-networking-glossary.adoc b/modules/virt-networking-glossary.adoc index a32d35c4ddc1..1e3374efc841 100644 --- a/modules/virt-networking-glossary.adoc +++ b/modules/virt-networking-glossary.adoc @@ -7,7 +7,8 @@ [id="virt-networking-glossary_{context}"] = {VirtProductName} networking glossary -The following terms are used throughout {VirtProductName} documentation: +[role="_abstract"] +The following terms are used throughout {VirtProductName} documentation. Container Network Interface (CNI):: A link:https://www.cncf.io/[Cloud Native Computing Foundation] project, focused on container network connectivity. @@ -27,4 +28,4 @@ ClusterUserDefinedNetwork (CUDN):: A cluster-scoped CRD introduced by the user-d ifndef::openshift-rosa,openshift-dedicated[] Node network configuration policy (NNCP):: A CRD introduced by the nmstate project, describing the requested network configuration on nodes. You update the node network configuration, including adding and removing interfaces, by applying a `NodeNetworkConfigurationPolicy` manifest to the cluster. 
-endif::openshift-rosa,openshift-dedicated[] \ No newline at end of file +endif::openshift-rosa,openshift-dedicated[] diff --git a/modules/virt-nmstate-example-policy-configurations.adoc b/modules/virt-nmstate-example-policy-configurations.adoc index 7e4cb21a4641..4b802b28083e 100644 --- a/modules/virt-nmstate-example-policy-configurations.adoc +++ b/modules/virt-nmstate-example-policy-configurations.adoc @@ -6,7 +6,8 @@ [id="virt-nmstate-example-policy-configurations_{context}"] = Example policy configurations for different interfaces -Before you read the different example `NodeNetworkConfigurationPolicy` (NNCP) manifest configurations, consider the following factors when you apply a policy to nodes so that your cluster runs under its best performance conditions: +[role="_abstract"] +Before you read the different example `NodeNetworkConfigurationPolicy` (NNCP) manifest configurations, consider the following factors when you apply a policy to nodes so that your cluster runs under its best performance conditions. * If you want to apply multiple NNCP CRs to a node, you must create the NNCPs in a logical order that is based on the alphanumeric sorting of the policy names. The Kubernetes NMState Operator continuously checks for a newly created NNCP CR so that the Operator can instantly apply the CR to the node. diff --git a/modules/virt-node-network-config-console.adoc b/modules/virt-node-network-config-console.adoc index a36f599fedfc..f3659963ea28 100644 --- a/modules/virt-node-network-config-console.adoc +++ b/modules/virt-node-network-config-console.adoc @@ -1,5 +1,8 @@ :_mod-docs-content-type: CONCEPT [id="virt-node-network-config-console_{context}"] = Managing policy from the web console + +[role="_abstract"] You can update the node network configuration, such as adding or removing interfaces from nodes, by applying `NodeNetworkConfigurationPolicy` manifests to the cluster.
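The `NodeNetworkConfigurationPolicy` manifests referenced above share a common shape. A minimal hedged sketch follows; the policy, bridge, and NIC names are placeholders, and the desired state follows the nmstate schema:

```shell
# Sketch: apply a minimal NNCP that creates a Linux bridge on eth1 (placeholder names).
oc apply -f - <<'EOF'
apiVersion: nmstate.io/v1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1-eth1-policy
spec:
  desiredState:
    interfaces:
      - name: br1
        type: linux-bridge
        state: up
        bridge:
          port:
            - name: eth1
EOF
```

Without a `nodeSelector`, a policy like this applies to all nodes; the example configurations in the modules that follow show how to scope it.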
-Manage the policy from the web console by accessing the list of created policies in the *NodeNetworkConfigurationPolicy* page under the *Networking* menu. This page enables you to create, update, monitor, and delete the policies. \ No newline at end of file + +Manage the policy from the web console by accessing the list of created policies in the *NodeNetworkConfigurationPolicy* page under the *Networking* menu. This page enables you to create, update, monitor, and delete the policies. diff --git a/modules/virt-node-placement-rule-examples.adoc b/modules/virt-node-placement-rule-examples.adoc index b160b235cc40..9c594a342996 100644 --- a/modules/virt-node-placement-rule-examples.adoc +++ b/modules/virt-node-placement-rule-examples.adoc @@ -7,9 +7,11 @@ = Node placement rule examples ifndef::openshift-rosa,openshift-dedicated[] +[role="_abstract"] You can specify node placement rules for a {VirtProductName} component by editing a `Subscription`, `HyperConverged`, or `HostPathProvisioner` object. endif::openshift-rosa,openshift-dedicated[] ifdef::openshift-rosa,openshift-dedicated[] +[role="_abstract"] You can specify node placement rules for a {VirtProductName} component by editing a `HyperConverged` or `HostPathProvisioner` object. endif::openshift-rosa,openshift-dedicated[] @@ -23,7 +25,8 @@ Currently, you cannot configure node placement rules for the `Subscription` obje The `Subscription` object does not support the `affinity` node placement rule. -.Example `Subscription` object with `nodeSelector` rule +Example `Subscription` object with `nodeSelector` rule: + [source,yaml,subs="attributes+"] ---- apiVersion: operators.coreos.com/v1alpha1 @@ -43,7 +46,8 @@ spec: ---- <1> OLM deploys the {VirtProductName} Operators on nodes labeled `example.io/example-infra-key = example-infra-value`.
-.Example `Subscription` object with `tolerations` rule +Example `Subscription` object with `tolerations` rule: + [source,yaml,subs="attributes+"] ---- apiVersion: operators.coreos.com/v1alpha1 @@ -72,7 +76,8 @@ endif::openshift-rosa,openshift-dedicated[] To specify the nodes where {VirtProductName} deploys its components, you can edit the `nodePlacement` object in the `HyperConverged` custom resource (CR) file that you create during {VirtProductName} installation. -.Example `HyperConverged` object with `nodeSelector` rule +Example `HyperConverged` object with `nodeSelector` rule: + [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1beta1 @@ -93,7 +98,8 @@ spec: <1> Infrastructure resources are placed on nodes labeled `example.io/example-infra-key = example-infra-value`. <2> Workloads are placed on nodes labeled `example.io/example-workloads-key = example-workloads-value`. -.Example `HyperConverged` object with `affinity` rule +Example `HyperConverged` object with `affinity` rule: + [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1beta1 @@ -137,7 +143,8 @@ spec: <2> Workloads are placed on nodes labeled `example.io/example-workloads-key = example-workloads-value`. <3> Nodes that have more than eight CPUs are preferred for workloads, but if they are not available, pods are still scheduled. -.Example `HyperConverged` object with `tolerations` rule +Example `HyperConverged` object with `tolerations` rule: + [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1beta1 @@ -170,7 +177,8 @@ After you deploy a virtual machine (VM) with the hostpath provisioner (HPP) stor You can configure node placement rules by specifying `nodeSelector`, `affinity`, or `tolerations` for the `spec.workload` field of the `HostPathProvisioner` object that you create when you install the hostpath provisioner.
-.Example `HostPathProvisioner` object with `nodeSelector` rule +Example `HostPathProvisioner` object with `nodeSelector` rule: + [source,yaml] ---- apiVersion: hostpathprovisioner.kubevirt.io/v1beta1 diff --git a/modules/virt-optimizing-clone-performance-at-scale-in-openshift-data-foundation.adoc b/modules/virt-optimizing-clone-performance-at-scale-in-openshift-data-foundation.adoc index 65b190c0c5fd..9f816749de7e 100644 --- a/modules/virt-optimizing-clone-performance-at-scale-in-openshift-data-foundation.adoc +++ b/modules/virt-optimizing-clone-performance-at-scale-in-openshift-data-foundation.adoc @@ -6,12 +6,17 @@ [id="virt-optimizing-clone-performance-at-scale-in-openshift-data-foundation_{context}"] = Optimizing clone performance at scale in {rh-storage} -When you use {rh-storage}, the storage profile configures the default cloning strategy as `csi-clone`. However, this method has limitations, as shown in the following link. After a certain number of clones are created from a persistent volume claim (PVC), a background flattening process begins, which can significantly reduce clone creation performance at scale. +[role="_abstract"] +When you use {rh-storage}, the storage profile configures the default cloning strategy as `csi-clone`. However, this method has limitations, as shown in the following link. + +After a certain number of clones are created from a persistent volume claim (PVC), a background flattening process begins, which can significantly reduce clone creation performance at scale. To improve performance when creating hundreds of clones from a single source PVC, use the `VolumeSnapshot` cloning method instead of the default `csi-clone` strategy. .Procedure -Create a `VolumeSnapshot` custom resource (CR) of the source image by using the following content: + +. Create a `VolumeSnapshot` custom resource (CR) of the source image by using the following content: ++ [source,yaml] ---- apiVersion: snapshot.storage.k8s.io/v1 @@ -26,7 +31,7 @@ spec: ---- .
Add the `spec.source.snapshot` stanza to reference the `VolumeSnapshot` as the source for the `DataVolume clone`: - ++ [source,yaml] ---- spec: @@ -34,4 +39,4 @@ spec: snapshot: namespace: golden-ns name: golden-volumesnapshot ----- \ No newline at end of file +---- diff --git a/modules/virt-options-configuring-mdevs.adoc b/modules/virt-options-configuring-mdevs.adoc index b419b79906f9..8ce89d1487c5 100644 --- a/modules/virt-options-configuring-mdevs.adoc +++ b/modules/virt-options-configuring-mdevs.adoc @@ -6,6 +6,7 @@ [id="virt-options-configuring-mdevs_{context}"] = Options for configuring mediated devices +[role="_abstract"] There are two available methods for configuring mediated devices when using the NVIDIA GPU Operator. The method that Red Hat tests uses {VirtProductName} features to schedule mediated devices, while the NVIDIA method only uses the GPU Operator. Using the NVIDIA GPU Operator to configure mediated devices:: @@ -24,7 +25,6 @@ Setting this feature gate as described in the link:https://docs.nvidia.com/datac ==== * You must configure your `ClusterPolicy` manifest so that it matches the following example: + -.Example manifest [source,yaml] ---- kind: ClusterPolicy diff --git a/modules/virt-organize-vms-web.adoc b/modules/virt-organize-vms-web.adoc index 79a3add78053..1b56060f3329 100644 --- a/modules/virt-organize-vms-web.adoc +++ b/modules/virt-organize-vms-web.adoc @@ -7,6 +7,7 @@ [id="virt-organize-vms-web_{context}"] = Organizing virtual machines by using the web console +[role="_abstract"] In addition to creating virtual machines (VMs) in different projects, you can use the tree view to further organize them in folders. 
.Procedure diff --git a/modules/virt-overriding-cpu-and-memory-defaults.adoc b/modules/virt-overriding-cpu-and-memory-defaults.adoc index b8f69f98c73f..cbd4ed33d498 100644 --- a/modules/virt-overriding-cpu-and-memory-defaults.adoc +++ b/modules/virt-overriding-cpu-and-memory-defaults.adoc @@ -6,6 +6,7 @@ [id="virt-overriding-cpu-and-memory-defaults_{context}"] = Overriding CPU and memory defaults +[role="_abstract"] Modify the default settings for CPU and memory requests and limits for your use case by adding the `spec.resourceRequirements.storageWorkloads` stanza to the `HyperConverged` custom resource (CR). .Prerequisites diff --git a/modules/virt-overriding-default-fs-overhead-value.adoc b/modules/virt-overriding-default-fs-overhead-value.adoc index b0931d353458..4bd3467568ef 100644 --- a/modules/virt-overriding-default-fs-overhead-value.adoc +++ b/modules/virt-overriding-default-fs-overhead-value.adoc @@ -6,6 +6,7 @@ [id="virt-overriding-default-fs-overhead-value_{context}"] = Overriding the default file system overhead value +[role="_abstract"] Change the amount of persistent volume claim (PVC) space that {VirtProductName} reserves for file system overhead by editing the `spec.filesystemOverhead` attribute of the `HCO` object. .Prerequisites diff --git a/modules/virt-pausing-vm-web.adoc b/modules/virt-pausing-vm-web.adoc index cd8525cf9599..a033059a8a9c 100644 --- a/modules/virt-pausing-vm-web.adoc +++ b/modules/virt-pausing-vm-web.adoc @@ -6,6 +6,7 @@ [id="virt-pausing-vm-web_{context}"] = Pausing a virtual machine +[role="_abstract"] You can pause a virtual machine (VM) from the web console.
.Procedure diff --git a/modules/virt-policy-attributes.adoc b/modules/virt-policy-attributes.adoc index 31b8d773416c..e25ca3d2c17e 100644 --- a/modules/virt-policy-attributes.adoc +++ b/modules/virt-policy-attributes.adoc @@ -7,6 +7,7 @@ [id="policy-attributes_{context}"] = Policy attributes +[role="_abstract"] You can schedule a virtual machine (VM) by specifying a policy attribute and a CPU feature that is matched for compatibility when the VM is scheduled on a node. A policy attribute specified for a VM determines how that VM is scheduled on a node. [cols="30,70"] diff --git a/modules/virt-preparing-container-disk-for-vms.adoc b/modules/virt-preparing-container-disk-for-vms.adoc index edc7231b0984..b39a7f74a1be 100644 --- a/modules/virt-preparing-container-disk-for-vms.adoc +++ b/modules/virt-preparing-container-disk-for-vms.adoc @@ -6,6 +6,7 @@ [id="virt-preparing-container-disk-for-vms_{context}"] = Building and uploading a container disk +[role="_abstract"] You can build a virtual machine (VM) image into a container disk and upload it to a registry. The size of a container disk is limited by the maximum layer size of the registry where the container disk is hosted. diff --git a/modules/virt-preserving-lm-perms.adoc b/modules/virt-preserving-lm-perms.adoc index 3d81894e1282..90d9b3d6ae60 100644 --- a/modules/virt-preserving-lm-perms.adoc +++ b/modules/virt-preserving-lm-perms.adoc @@ -6,6 +6,7 @@ [id="virt-preserving-lm-perms_{context}"] = Preserving pre-4.19 live migration permissions during update +[role="_abstract"] Before you update to {VirtProductName} {VirtVersion}, you can create a temporary cluster role to preserve the previous live migration permissions until you are ready for the more restrictive default permissions to take effect. 
.Prerequisites @@ -78,4 +79,4 @@ $ oc create clusterrolebinding kvmigrate --clusterrole=kubevirt.io:migrate --use $ oc delete clusterrole kubevirt.io:upgrademigrate ---- + -After you delete the temporary cluster role, only users with the `kubevirt.io:migrate` role can create, delete, and update live migration requests. \ No newline at end of file +After you delete the temporary cluster role, only users with the `kubevirt.io:migrate` role can create, delete, and update live migration requests. diff --git a/modules/virt-preventing-nvidia-gpu-operands-from-deploying-on-nodes.adoc b/modules/virt-preventing-nvidia-gpu-operands-from-deploying-on-nodes.adoc index a2429f2f5d3b..b2c7f99a3846 100644 --- a/modules/virt-preventing-nvidia-gpu-operands-from-deploying-on-nodes.adoc +++ b/modules/virt-preventing-nvidia-gpu-operands-from-deploying-on-nodes.adoc @@ -7,7 +7,8 @@ [id="virt-preventing-nvidia-operands-from-deploying-on-nodes_{context}"] = Preventing NVIDIA GPU operands from deploying on nodes -If you use the link:https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/openshift/contents.html[NVIDIA GPU Operator] in your cluster, you can apply the `nvidia.com/gpu.deploy.operands=false` label to nodes that you do not want to configure for GPU or vGPU operands. This label prevents the creation of the pods that configure GPU or vGPU operands and terminates the pods if they already exist. +[role="_abstract"] +If you use the link:https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/openshift/contents.html[NVIDIA GPU Operator] in your cluster, you can apply the `nvidia.com/gpu.deploy.operands=false` label to nodes that you do not want to configure for GPU or vGPU operands. This prevents the creation of the pods that configure GPU or vGPU operands and terminates existing pods. 
.Prerequisites @@ -50,8 +51,8 @@ $ oc describe node $ oc get pods -n nvidia-gpu-operator ---- + -.Example output - +Example output: ++ [source,terminal] ---- NAME READY STATUS RESTARTS AGE @@ -71,8 +72,8 @@ nvidia-vfio-manager-zqtck 1/1 Terminating 0 9d $ oc get pods -n nvidia-gpu-operator ---- + -.Example output - +Example output: ++ [source,terminal] ---- NAME READY STATUS RESTARTS AGE diff --git a/modules/virt-preventing-workload-updates-during-control-plane-only-update.adoc b/modules/virt-preventing-workload-updates-during-control-plane-only-update.adoc index 36d37e788bba..586c1caad8ca 100644 --- a/modules/virt-preventing-workload-updates-during-control-plane-only-update.adoc +++ b/modules/virt-preventing-workload-updates-during-control-plane-only-update.adoc @@ -6,6 +6,7 @@ [id="virt-preventing-workload-updates-during-control-plane-only-update_{context}"] = Preventing workload updates during a Control Plane Only update +[role="_abstract"] When you update from one Extended Update Support (EUS) version to the next, you must manually disable automatic workload updates to prevent {VirtProductName} from migrating or evicting workloads during the update process. [IMPORTANT] @@ -59,7 +60,8 @@ $ oc patch hyperconverged kubevirt-hyperconverged -n {CNVNamespace} \ --type json -p '[{"op":"replace","path":"/spec/workloadUpdateStrategy/workloadUpdateMethods", "value":[]}]' ---- + -.Example output +Example output: ++ [source,terminal] ---- hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched @@ -72,9 +74,9 @@ hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched $ oc get hyperconverged kubevirt-hyperconverged -n {CNVNamespace} -o json | jq ".status.conditions" ---- + -.Example output +Example output: ++ [%collapsible] -==== [source,json] ---- [ @@ -120,7 +122,6 @@ $ oc get hyperconverged kubevirt-hyperconverged -n {CNVNamespace} -o json | jq " } ] ---- -==== <1> The {VirtProductName} Operator has the `Upgradeable` status. . 
Manually update your cluster from the source EUS version to the next minor version of {product-title}: @@ -131,8 +132,9 @@ $ oc get hyperconverged kubevirt-hyperconverged -n {CNVNamespace} -o json | jq " $ oc adm upgrade ---- + -.Verification -* Check the current version by running the following command: +Verification: + +** Check the current version by running the following command: + [source,terminal] ---- @@ -162,7 +164,8 @@ $ oc get csv -n {CNVNamespace} $ oc get hyperconverged kubevirt-hyperconverged -n {CNVNamespace} -o json | jq ".status.versions" ---- + -.Example output +Example output: ++ [source,terminal,subs="attributes+"] ---- [ @@ -210,15 +213,16 @@ $ oc patch hyperconverged kubevirt-hyperconverged -n {CNVNamespace} --type json "[{\"op\":\"add\",\"path\":\"/spec/workloadUpdateStrategy/workloadUpdateMethods\", \"value\":{WorkloadUpdateMethodConfig}}]" ---- + -.Example output +Example output: ++ [source,terminal] ---- hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched ---- + -.Verification +Verification: -* Check the status of VM migration by running the following command: +** Check the status of VM migration by running the following command: + [source,terminal] ---- diff --git a/modules/virt-pxe-booting-with-mac-address.adoc b/modules/virt-pxe-booting-with-mac-address.adoc index 41d965bd5daa..7f7d67a9292e 100644 --- a/modules/virt-pxe-booting-with-mac-address.adoc +++ b/modules/virt-pxe-booting-with-mac-address.adoc @@ -6,8 +6,10 @@ [id="virt-pxe-booting-with-mac-address_{context}"] = PXE booting with a specified MAC address +[role="_abstract"] As an administrator, you can boot a client over the network by first creating a `NetworkAttachmentDefinition` object for your PXE network. Then, reference the network attachment definition in your virtual machine instance configuration file before you start the virtual machine instance. 
+ You can also specify a MAC address in the virtual machine instance configuration file, if required by the PXE server. .Prerequisites @@ -116,7 +118,8 @@ networks: $ oc create -f vmi-pxe-boot.yaml ---- + -.Example output +Example output: ++ [source,terminal] ---- virtualmachineinstance.kubevirt.io "vmi-pxe-boot" created @@ -156,7 +159,8 @@ In this case, we used `eth1` for the PXE boot, without an IP address. The other $ ip addr ---- + -.Example output +Example output: ++ [source,terminal] ---- ... diff --git a/modules/virt-querying-metrics.adoc b/modules/virt-querying-metrics.adoc index 3e38776c3ebd..d4a9ff991b51 100644 --- a/modules/virt-querying-metrics.adoc +++ b/modules/virt-querying-metrics.adoc @@ -1,11 +1,12 @@ // Module included in the following assemblies: - // +// // * virt/support/virt-prometheus-queries.adoc :_mod-docs-content-type: REFERENCE [id="virt-querying-metrics_{context}"] = Virtualization metrics +[role="_abstract"] The following metric descriptions include example Prometheus Query Language (PromQL) queries. These metrics are not an API and might change between versions. For a complete list of virtualization metrics, see link:https://github.com/kubevirt/monitoring/blob/main/docs/metrics.md[KubeVirt components metrics]. diff --git a/modules/virt-querying-the-node-exporter-service-for-metrics.adoc b/modules/virt-querying-the-node-exporter-service-for-metrics.adoc index a9bf45806af1..3588aabbc7f0 100644 --- a/modules/virt-querying-the-node-exporter-service-for-metrics.adoc +++ b/modules/virt-querying-the-node-exporter-service-for-metrics.adoc @@ -6,6 +6,7 @@ [id="virt-querying-the-node-exporter-service-for-metrics-_{context}"] = Querying the node-exporter service for metrics +[role="_abstract"] Metrics are exposed for virtual machines through an HTTP service endpoint under the `/metrics` canonical name. 
When you query for metrics, Prometheus directly scrapes the metrics from the metrics endpoint exposed by the virtual machines and presents these metrics for viewing. .Prerequisites @@ -28,7 +29,8 @@ $ oc get service -n $ curl http://<172.30.226.162:9100>/metrics | grep -vE "^#|^$" ---- + -.Example output +Example output: ++ [source,terminal] ---- node_arp_entries{device="eth0"} 1 diff --git a/modules/virt-real-time-config-map-parameters.adoc b/modules/virt-real-time-config-map-parameters.adoc index 82caff8ca424..f6938bdff64b 100644 --- a/modules/virt-real-time-config-map-parameters.adoc +++ b/modules/virt-real-time-config-map-parameters.adoc @@ -6,7 +6,8 @@ [id="virt-real-time-config-map-parameters_{context}"] = Real-time checkup config map parameters -The following table shows the mandatory and optional parameters that you can set in the `data` stanza of the input `ConfigMap` manifest when you run a real-time checkup: +[role="_abstract"] +The following table shows the mandatory and optional parameters that you can set in the `data` stanza of the input `ConfigMap` manifest when you run a real-time checkup. .Real-time checkup config map input parameters [cols="1,1,1", options="header"] diff --git a/modules/virt-reclaiming-statically-provisioned-persistent-volumes.adoc b/modules/virt-reclaiming-statically-provisioned-persistent-volumes.adoc index af6c2c782a49..173b85a4088a 100644 --- a/modules/virt-reclaiming-statically-provisioned-persistent-volumes.adoc +++ b/modules/virt-reclaiming-statically-provisioned-persistent-volumes.adoc @@ -6,7 +6,8 @@ [id="virt-reclaiming-statically-provisioned-persistent-volumes_{context}"] = Reclaiming statically provisioned persistent volumes -Reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage. 
+[role="_abstract"] +You can reclaim a statically provisioned persistent volume (PV) by unbinding the persistent volume claim (PVC) and deleting the PV. You might also need to manually delete the shared storage. Reclaiming a statically provisioned PV is dependent on the underlying storage. This procedure provides a general approach that might need to be customized depending on your storage. diff --git a/modules/virt-regional-dr-odf.adoc b/modules/virt-regional-dr-odf.adoc index 96570ced3944..21c8d6d50684 100644 --- a/modules/virt-regional-dr-odf.adoc +++ b/modules/virt-regional-dr-odf.adoc @@ -6,11 +6,13 @@ [id="regional-dr-odf_{context}"] = Regional-DR for {rh-storage-first} +[role="_abstract"] {VirtProductName} supports the link:https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/latest/html-single/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/index#rdr-solution[Regional-DR solution for {rh-storage}], which provides asynchronous data replication at regular intervals between managed {VirtProductName} clusters installed on primary and secondary sites. -.Regional-DR differences +== Regional-DR differences + * Regional-DR supports higher network latency between the primary and secondary sites. * Regional-DR uses RBD snapshots to replicate data asynchronously. Currently, your applications must be resilient to small variances between VM disks. You can prevent these variances by using single disk VMs. * Using the import method when selecting a population source for your VM disk is recommended. However, you can protect VMs that use cloned PVCs if you select a `VolumeReplicationClass` that enables image flattening. For more information, see the {rh-storage} documentation. -For more information about using the Regional-DR solution for {rh-storage} with {VirtProductName}, see {ibm-title}'s {rh-storage} Regional-DR documentation. 
\ No newline at end of file +For more information about using the Regional-DR solution for {rh-storage} with {VirtProductName}, see {ibm-title}'s {rh-storage} Regional-DR documentation. diff --git a/modules/virt-remove-boot-order-item-web.adoc b/modules/virt-remove-boot-order-item-web.adoc index 0f7aab2f7b6f..7ef2e3192eeb 100644 --- a/modules/virt-remove-boot-order-item-web.adoc +++ b/modules/virt-remove-boot-order-item-web.adoc @@ -8,6 +8,7 @@ = Removing items from a boot order list in the web console +[role="_abstract"] Remove items from a boot order list by using the web console. .Procedure @@ -21,7 +22,7 @@ Remove items from a boot order list by using the web console. . Click the pencil icon that is located on the right side of *Boot Order*. . Click the *Remove* icon {delete} next to the item. The item is removed from the boot order list and saved in the list of available boot sources. If you remove all items from the boot order list, the following message displays: *No resource selected. VM will attempt to boot from disks by order of appearance in YAML file.* - ++ [NOTE] ==== If the virtual machine is running, changes to *Boot Order* will not take effect until you restart the virtual machine. diff --git a/modules/virt-removing-custom-mcp.adoc b/modules/virt-removing-custom-mcp.adoc index c9a75c4fdc2a..2ea78f2add8a 100644 --- a/modules/virt-removing-custom-mcp.adoc +++ b/modules/virt-removing-custom-mcp.adoc @@ -6,6 +6,7 @@ [id="virt-removing-custom-mcp_{context}"] = Removing a custom machine config pool for high-availability clusters +[role="_abstract"] You can delete a custom machine config pool that you previously created for your high-availability cluster. 
.Prerequisites diff --git a/modules/virt-removing-interface-from-nodes.adoc b/modules/virt-removing-interface-from-nodes.adoc index 23b634588a41..2ed66d435559 100644 --- a/modules/virt-removing-interface-from-nodes.adoc +++ b/modules/virt-removing-interface-from-nodes.adoc @@ -6,6 +6,7 @@ [id="virt-removing-interface-from-nodes_{context}"] = Removing an interface from nodes +[role="_abstract"] You can remove an interface from one or more nodes in the cluster by editing the `NodeNetworkConfigurationPolicy` object and setting the `state` of the interface to `absent`. Removing an interface from a node does not automatically restore the node network configuration to a previous state. If you want to restore the previous state, you will need to define that node network configuration in the policy. diff --git a/modules/virt-removing-mediated-device-from-cluster-cli.adoc b/modules/virt-removing-mediated-device-from-cluster-cli.adoc index fbc7fd90287e..02c6a9b9d7f7 100644 --- a/modules/virt-removing-mediated-device-from-cluster-cli.adoc +++ b/modules/virt-removing-mediated-device-from-cluster-cli.adoc @@ -6,6 +6,7 @@ [id="virt-removing-mediated-device-from-cluster-cli_{context}"] = Removing mediated devices from the cluster +[role="_abstract"] To remove a mediated device from the cluster, delete the information for that device from the `HyperConverged` custom resource (CR). .Prerequisites @@ -23,7 +24,6 @@ $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} . Remove the device information from the `spec.mediatedDevicesConfiguration` and `spec.permittedHostDevices` stanzas of the `HyperConverged` CR. Removing both entries ensures that you can later create a new mediated device type on the same node. 
For example: + -.Example configuration file [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1 diff --git a/modules/virt-removing-pci-device-from-cluster-cli.adoc b/modules/virt-removing-pci-device-from-cluster-cli.adoc index ad7776d43f77..8bc7e8e45401 100644 --- a/modules/virt-removing-pci-device-from-cluster-cli.adoc +++ b/modules/virt-removing-pci-device-from-cluster-cli.adoc @@ -6,6 +6,7 @@ [id="virt-removing-pci-device-from-cluster_{context}"] = Removing PCI host devices from the cluster using the CLI +[role="_abstract"] To remove a PCI host device from the cluster, delete the information for that device from the `HyperConverged` custom resource (CR). .Prerequisites @@ -21,7 +22,8 @@ $ oc edit hyperconverged kubevirt-hyperconverged -n {CNVNamespace} . Remove the PCI device information from the `spec.permittedHostDevices.pciHostDevices` array by deleting the `pciDeviceSelector`, `resourceName` and `externalResourceProvider` (if applicable) fields for the appropriate device. In this example, the `intel.com/qat` resource has been deleted. + -.Example configuration file +Example configuration file: ++ [source,yaml,subs="attributes+"] ---- apiVersion: hco.kubevirt.io/v1 @@ -49,7 +51,8 @@ spec: $ oc describe node ---- + -.Example output +Example output: ++ [source,terminal] ---- Capacity: diff --git a/modules/virt-removing-vm-delete-protection.adoc b/modules/virt-removing-vm-delete-protection.adoc index be61bf7e2fe6..d6abc1f24f96 100644 --- a/modules/virt-removing-vm-delete-protection.adoc +++ b/modules/virt-removing-vm-delete-protection.adoc @@ -7,6 +7,7 @@ = Removing the VM delete protection option +[role="_abstract"] When you enable delete protection on a virtual machine (VM), you ensure that the VM cannot be inadvertently deleted. You can also disable the protection for a VM. As a cluster administrator, you can choose not to make the VM delete protection option available. 
VMs with delete protection already enabled retain that setting; for any new VMs that are created, enabling the option is not allowed. @@ -22,7 +23,6 @@ You can remove the delete protection option by establishing a validation admissi . Create the validation admission policy, as shown in the following example: + -.Example validation admission policy file [source,yaml] ---- apiVersion: admissionregistration.k8s.io/v1 @@ -58,7 +58,6 @@ $ oc apply -f disable-vm-delete-protection.yaml . Create the validation admission policy binding, as shown in the following example: + -.Example validation admission policy binding file [source,yaml] ---- apiVersion: admissionregistration.k8s.io/v1 diff --git a/modules/virt-removing-wasp-agent.adoc b/modules/virt-removing-wasp-agent.adoc index 9813bdcb9948..19e4288b9c3c 100644 --- a/modules/virt-removing-wasp-agent.adoc +++ b/modules/virt-removing-wasp-agent.adoc @@ -6,6 +6,7 @@ [id="virt-removing-wasp-agent_{context}"] = Removing the wasp-agent component +[role="_abstract"] If you no longer need memory overcommitment, you can remove the `wasp-agent` component and associated resources from your cluster. .Prerequisites @@ -84,4 +85,4 @@ No `wasp-agent` should be listed. $ oc debug node/ -- free -m ---- + -Ensure that the `Swap:` row shows `0` or that no swap space shows as provisioned. \ No newline at end of file +Ensure that the `Swap:` row shows `0` or that no swap space shows as provisioned. diff --git a/modules/virt-restarting-vm-web.adoc b/modules/virt-restarting-vm-web.adoc index 97012b278d2e..636cc0c9c90c 100644 --- a/modules/virt-restarting-vm-web.adoc +++ b/modules/virt-restarting-vm-web.adoc @@ -6,6 +6,7 @@ [id="virt-restarting-vm-web_{context}"] = Restarting a virtual machine +[role="_abstract"] You can restart a running virtual machine (VM) from the web console. 
[IMPORTANT] diff --git a/modules/virt-restoring-vm-from-snapshot-cli.adoc b/modules/virt-restoring-vm-from-snapshot-cli.adoc index 784f62baaf8b..7c2eb943be6f 100644 --- a/modules/virt-restoring-vm-from-snapshot-cli.adoc +++ b/modules/virt-restoring-vm-from-snapshot-cli.adoc @@ -6,6 +6,7 @@ [id="virt-restoring-vm-from-snapshot-cli_{context}"] = Restoring a VM from a snapshot by using the CLI +[role="_abstract"] You can restore an existing virtual machine (VM) to a previous configuration by using the command line. You can only restore from an offline VM snapshot. .Prerequisites @@ -56,7 +57,8 @@ The snapshot controller updates the status fields of the `VirtualMachineRestore` $ oc get vmrestore ---- + -.Example output +Example output: ++ [source, yaml] ---- apiVersion: snapshot.kubevirt.io/v1beta1 diff --git a/modules/virt-restoring-vm-from-snapshot-web.adoc b/modules/virt-restoring-vm-from-snapshot-web.adoc index 04454b1c0884..33711fd96059 100644 --- a/modules/virt-restoring-vm-from-snapshot-web.adoc +++ b/modules/virt-restoring-vm-from-snapshot-web.adoc @@ -6,6 +6,7 @@ [id="virt-restoring-vm-from-snapshot-web_{context}"] = Restoring a VM from a snapshot by using the web console +[role="_abstract"] You can restore a virtual machine (VM) to a previous configuration represented by a snapshot in the {product-title} web console. .Procedure diff --git a/modules/virt-rhel-9.adoc b/modules/virt-rhel-9.adoc index ab6fc6b3001f..be305294d3ca 100644 --- a/modules/virt-rhel-9.adoc +++ b/modules/virt-rhel-9.adoc @@ -6,6 +6,7 @@ [id="virt-rhel-9_{context}"] = {op-system-base} 9 compatibility +[role="_abstract"] {VirtProductName} {VirtVersion} is based on {op-system-base-full} 9. You can update to {VirtProductName} {VirtVersion} from a version that was based on {op-system-base} 8 by following the standard {VirtProductName} update procedure. No additional steps are required. As in previous versions, you can perform the update without disrupting running workloads. 
{VirtProductName} {VirtVersion} supports live migration from {op-system-base} 8 nodes to {op-system-base} 9 nodes. @@ -20,4 +21,4 @@ Updating {VirtProductName} does not change the `machineType` value of any existi [IMPORTANT] ==== Before you change a VM's `machineType` value, you must shut down the VM. -==== \ No newline at end of file +==== diff --git a/modules/virt-running-real-time-checkup.adoc b/modules/virt-running-real-time-checkup.adoc index 2bdf135a1c33..2cd15cfb45b8 100644 --- a/modules/virt-running-real-time-checkup.adoc +++ b/modules/virt-running-real-time-checkup.adoc @@ -6,7 +6,8 @@ [id="virt-running-real-time-checkup_{context}"] = Running a real-time checkup -Use a predefined checkup to verify that your {product-title} cluster can run virtualized real-time workloads. +[role="_abstract"] +You can use a predefined checkup to verify that your {product-title} cluster can run virtualized real-time workloads. .Prerequisites @@ -15,11 +16,11 @@ Use a predefined checkup to verify that your {product-title} cluster can run vir .Procedure -. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest file for the real-time checkup: +. Create a `ServiceAccount`, `Role`, and `RoleBinding` manifest file for the real-time checkup. ++ +Example service account, role, and rolebinding manifest file: + -.Example service account, role, and rolebinding manifest file [%collapsible] -==== [source,yaml] ---- --- @@ -76,7 +77,6 @@ roleRef: kind: Role name: kubevirt-realtime-checker ---- -==== . Apply the `ServiceAccount`, `Role`, and `RoleBinding` manifest to the target namespace: + @@ -85,9 +85,10 @@ roleRef: $ oc apply -n -f .yaml ---- -. Create a `ConfigMap` manifest file that contains the input parameters for the checkup: +. Create a `ConfigMap` manifest file that contains the input parameters for the checkup. ++ +Example input config map: + -.Example input config map [source,yaml] ---- apiVersion: v1 @@ -112,9 +113,10 @@ data: $ oc apply -n -f .yaml ---- -. 
Create a `Job` manifest to run the checkup: +. Create a `Job` manifest to run the checkup. ++ +Example job manifest: + -.Example job manifest [source,yaml,subs="attributes+"] ---- apiVersion: batch/v1 @@ -172,7 +174,8 @@ $ oc wait job realtime-checkup -n --for condition=complete -- $ oc get configmap realtime-checkup-config -n -o yaml ---- + -.Example output config map (success) +Example output config map (success): ++ [source,yaml] ---- apiVersion: v1 diff --git a/modules/virt-running-ssp-pipeline-cli.adoc b/modules/virt-running-ssp-pipeline-cli.adoc index 7fd409e6d855..acb99ab803af 100644 --- a/modules/virt-running-ssp-pipeline-cli.adoc +++ b/modules/virt-running-ssp-pipeline-cli.adoc @@ -6,6 +6,7 @@ [id="virt-running-tto-pipeline-cli_{context}"] = Running the example pipelines using the CLI +[role="_abstract"] Use a `PipelineRun` resource to run the example pipelines. A `PipelineRun` object is the running instance of a pipeline. It instantiates a pipeline for execution with specific inputs, outputs, and execution parameters on a cluster. It also creates a `TaskRun` object for each task in the pipeline. .Prerequisites diff --git a/modules/virt-running-ssp-pipeline-web.adoc b/modules/virt-running-ssp-pipeline-web.adoc index a572ec00f888..d24ced7bcdd2 100644 --- a/modules/virt-running-ssp-pipeline-web.adoc +++ b/modules/virt-running-ssp-pipeline-web.adoc @@ -6,6 +6,7 @@ [id="virt-running-tto-pipeline-web_{context}"] = Running the example pipelines using the web console +[role="_abstract"] You can run the example pipelines from the *Pipelines* menu in the web console. 
.Procedure diff --git a/modules/virt-runstrategies-vms.adoc b/modules/virt-runstrategies-vms.adoc index 83d6758f92ee..7b70e291bdf1 100644 --- a/modules/virt-runstrategies-vms.adoc +++ b/modules/virt-runstrategies-vms.adoc @@ -6,7 +6,8 @@ [id="virt-runstrategies-vms_{context}"] = Run strategies -The `spec.runStrategy` key has four possible values: +[role="_abstract"] +The `spec.runStrategy` key has four possible values: `Always`, `RerunOnFailure`, `Manual`, and `Halted`. `Always`:: The virtual machine instance (VMI) is always present when a virtual machine (VM) is created on another node. A new VMI is created if the original stops for any reason. diff --git a/modules/virt-schedule-cpu-host-model-vms.adoc b/modules/virt-schedule-cpu-host-model-vms.adoc index 64e2afaa6344..e70d6f9c950f 100644 --- a/modules/virt-schedule-cpu-host-model-vms.adoc +++ b/modules/virt-schedule-cpu-host-model-vms.adoc @@ -6,6 +6,7 @@ [id="virt-schedule-cpu-host-model-vms_{context}"] = Scheduling virtual machines with the host model +[role="_abstract"] When the CPU model for a virtual machine (VM) is set to `host-model`, the VM inherits the CPU model of the node where it is scheduled. .Procedure diff --git a/modules/virt-schedule-supported-cpu-model-vms.adoc b/modules/virt-schedule-supported-cpu-model-vms.adoc index 09019f2157c8..35d76952218a 100644 --- a/modules/virt-schedule-supported-cpu-model-vms.adoc +++ b/modules/virt-schedule-supported-cpu-model-vms.adoc @@ -6,6 +6,7 @@ [id="virt-schedule-supported-cpu-model-vms_{context}"] = Scheduling virtual machines with the supported CPU model +[role="_abstract"] You can configure a CPU model for a virtual machine (VM) to schedule it on a node where its CPU model is supported. 
.Procedure diff --git a/modules/virt-searching-vmis-web.adoc b/modules/virt-searching-vmis-web.adoc index fed3568b4181..7e73b539de79 100644 --- a/modules/virt-searching-vmis-web.adoc +++ b/modules/virt-searching-vmis-web.adoc @@ -6,6 +6,7 @@ [id="virt-searching-vmis-web_{context}"] = Searching for standalone virtual machine instances by using the web console +[role="_abstract"] You can search for virtual machine instances (VMIs) by using the search bar on the *VirtualMachines* page. Use the advanced search to apply additional filters. .Procedure @@ -27,4 +28,4 @@ You can search for virtual machine instances (VMIs) by using the search bar on t . Optional: If Advanced Cluster Management (ACM) is installed, use the *Cluster* dropdown to search across multiple clusters. -. Optional: Click the *Save search* icon to store your search in the `kubevirt-user-settings` ConfigMap. \ No newline at end of file +. Optional: Click the *Save search* icon to store your search in the `kubevirt-user-settings` ConfigMap. diff --git a/modules/virt-selecting-migration-network-ui.adoc b/modules/virt-selecting-migration-network-ui.adoc index ae4df99c0678..18a085f348da 100644 --- a/modules/virt-selecting-migration-network-ui.adoc +++ b/modules/virt-selecting-migration-network-ui.adoc @@ -7,6 +7,7 @@ [id="virt-selecting-migration-network-ui_{context}"] = Selecting a dedicated network by using the web console +[role="_abstract"] You can select a dedicated network for live migration by using the {product-title} web console. .Prerequisites @@ -18,4 +19,4 @@ You can select a dedicated network for live migration by using the {product-titl . Navigate to *Virtualization > Overview* in the {product-title} web console. . Click the *Settings* tab and then click *Live migration*. -. Select the network from the *Live migration network* list. \ No newline at end of file +. Select the network from the *Live migration network* list. 
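The run strategies and CPU model scheduling covered in the preceding modules can be combined in a single `VirtualMachine` manifest. The following sketch is illustrative only; the VM name and the CPU model value are assumptions, not values taken from these modules:

[source,yaml]
----
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm # hypothetical name
spec:
  runStrategy: Always # other values: RerunOnFailure, Manual, Halted
  template:
    spec:
      domain:
        cpu:
          model: Cascadelake-Server # assumed model; the VM is scheduled only on nodes that support it
----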
diff --git a/modules/virt-set-guest-log-single-vm-cli.adoc b/modules/virt-set-guest-log-single-vm-cli.adoc index 90f42d860503..8a5556604894 100644 --- a/modules/virt-set-guest-log-single-vm-cli.adoc +++ b/modules/virt-set-guest-log-single-vm-cli.adoc @@ -6,6 +6,7 @@ [id="virt-set-guest-log-single-vm-cli_{context}"] = Setting guest system log access for a single VM with the CLI +[role="_abstract"] You can configure access to VM guest system logs for a single VM by editing the `VirtualMachine` CR. This setting takes precedence over the cluster-wide default configuration. .Prerequisites diff --git a/modules/virt-set-guest-log-single-vm-web.adoc b/modules/virt-set-guest-log-single-vm-web.adoc index 796a75da4516..ced686df736a 100644 --- a/modules/virt-set-guest-log-single-vm-web.adoc +++ b/modules/virt-set-guest-log-single-vm-web.adoc @@ -6,6 +6,7 @@ [id="virt-set-guest-log-single-vm-web_{context}"] = Setting guest system log access for a single VM with the web console +[role="_abstract"] You can configure access to VM guest system logs for a single VM by using the web console. This setting takes precedence over the cluster-wide default configuration. .Procedure @@ -16,4 +17,4 @@ You can configure access to VM guest system logs for a single VM by using the we . Click the *Configuration* tab. -. Set *Guest system log access* to on or off. \ No newline at end of file +. Set *Guest system log access* to on or off. diff --git a/modules/virt-setting-cpu-allocation-ratio.adoc b/modules/virt-setting-cpu-allocation-ratio.adoc index 51518b8ff1f7..b1f91a94bb29 100644 --- a/modules/virt-setting-cpu-allocation-ratio.adoc +++ b/modules/virt-setting-cpu-allocation-ratio.adoc @@ -6,6 +6,7 @@ [id="virt-setting-cpu-allocation-ratio_{context}"] = Setting the CPU allocation ratio +[role="_abstract"] The CPU Allocation Ratio specifies the degree of overcommitment by mapping vCPUs to time slices of physical CPUs. 
For example, a mapping or ratio of 10:1 maps 10 virtual CPUs to 1 physical CPU by using time slices. @@ -18,8 +19,6 @@ To change the default number of vCPUs mapped to each physical CPU, set the `vmiC .Procedure -Set the `vmiCPUAllocationRatio` value in the `HyperConverged` CR to define a node CPU allocation ratio. - . Open the `HyperConverged` CR in your default editor by running the following command: + [source,terminal] diff --git a/modules/virt-setting-policy-attributes.adoc b/modules/virt-setting-policy-attributes.adoc index 44a83a73c71e..769b312b6ede 100644 --- a/modules/virt-setting-policy-attributes.adoc +++ b/modules/virt-setting-policy-attributes.adoc @@ -6,6 +6,7 @@ [id="virt-setting-policy-attributes_{context}"] = Setting a policy attribute and CPU feature +[role="_abstract"] You can set a policy attribute and CPU feature for each virtual machine (VM) to ensure that it is scheduled on a node according to policy and feature. The CPU feature that you set is verified to ensure that it is supported by the host CPU or emulated by the hypervisor. .Procedure diff --git a/modules/virt-setting-resource-quota-limits-for-vms.adoc b/modules/virt-setting-resource-quota-limits-for-vms.adoc index 3cd6169deb7e..2f1d871ddd4a 100644 --- a/modules/virt-setting-resource-quota-limits-for-vms.adoc +++ b/modules/virt-setting-resource-quota-limits-for-vms.adoc @@ -6,6 +6,7 @@ [id="virt-setting-resource-quota-limits-for-vms_{context}"] = Setting resource quota limits for virtual machines +[role="_abstract"] By default, {VirtProductName} automatically manages CPU and memory limits for virtual machines (VMs) if a namespace enforces resource quotas that require limits to be set. The memory limit is automatically set to twice the requested memory and the CPU limit is set to one per vCPU. You can customize the memory limit ratio for a specific namespace by adding the `alpha.kubevirt.io/auto-memory-limits-ratio` label to the namespace. 
For example, the following command sets the memory limit ratio to 1.2: diff --git a/modules/virt-sno-differences.adoc b/modules/virt-sno-differences.adoc index 5f009d5d8915..1f6726d2befc 100644 --- a/modules/virt-sno-differences.adoc +++ b/modules/virt-sno-differences.adoc @@ -6,6 +6,7 @@ [id="virt-sno-differences_{context}"] = {sno-caps} differences +[role="_abstract"] You can install {VirtProductName} on {sno}. However, you should be aware that {sno-caps} does not support the following features: diff --git a/modules/virt-specializing-windows-sysprep.adoc b/modules/virt-specializing-windows-sysprep.adoc index 5dcf31d955ae..050aab02bd8a 100644 --- a/modules/virt-specializing-windows-sysprep.adoc +++ b/modules/virt-specializing-windows-sysprep.adoc @@ -6,6 +6,7 @@ [id="virt-specializing-windows-sysprep_{context}"] = Specializing a Windows VM image +[role="_abstract"] Specializing a Windows virtual machine (VM) configures the computer-specific information from a generalized Windows image onto the VM. .Prerequisites @@ -24,4 +25,6 @@ Specializing a Windows virtual machine (VM) configures the computer-specific inf . In the *Sysprep* section, click *Edit*, browse to the `unattend.xml` answer file, and click *Save*. . Click *Create VirtualMachine*. +.Result + During the initial boot, Windows uses the `unattend.xml` answer file to specialize the VM. The VM is now ready to use. diff --git a/modules/virt-starting-vm-web.adoc b/modules/virt-starting-vm-web.adoc index fef8532639b0..3eb96cf006ad 100644 --- a/modules/virt-starting-vm-web.adoc +++ b/modules/virt-starting-vm-web.adoc @@ -6,6 +6,7 @@ [id="virt-starting-vm-web_{context}"] = Starting a virtual machine +[role="_abstract"] You can start a virtual machine (VM) from the web console. .Procedure @@ -31,7 +32,7 @@ You can start a virtual machine (VM) from the web console. .. Access the *VirtualMachine details* page by clicking the name of the VM. .. Click *Actions* -> *Start*. 
- ++ [NOTE] ==== When you start a VM that is provisioned from a `URL` source for the first time, the VM has a status of *Importing* while {VirtProductName} imports the container from the URL endpoint. Depending on the size of the image, this process might take several minutes. diff --git a/modules/virt-stopping-vm-web.adoc b/modules/virt-stopping-vm-web.adoc index d4d82767f7f3..7e6f155c2c01 100644 --- a/modules/virt-stopping-vm-web.adoc +++ b/modules/virt-stopping-vm-web.adoc @@ -6,6 +6,7 @@ [id="virt-stopping-vm-web_{context}"] = Stopping a virtual machine +[role="_abstract"] You can stop a virtual machine (VM) from the web console. .Procedure @@ -31,4 +32,4 @@ You can stop a virtual machine (VM) from the web console. .. Access the *VirtualMachine details* page by clicking the name of the VM. .. Click *Actions* → *Stop*. -.. If action confirmation is enabled, click *Stop* in the confirmation dialog. \ No newline at end of file +.. If action confirmation is enabled, click *Stop* in the confirmation dialog. diff --git a/modules/virt-storage-checkup-web-console.adoc b/modules/virt-storage-checkup-web-console.adoc index 737d89fe8b2b..33d09b398b7e 100644 --- a/modules/virt-storage-checkup-web-console.adoc +++ b/modules/virt-storage-checkup-web-console.adoc @@ -6,7 +6,8 @@ [id="virt-storage-checkup-web-console_{context}"] = Running a storage checkup by using the web console -Run a storage checkup to validate that storage is working correctly for virtual machines. +[role="_abstract"] +You can run a storage checkup to validate that storage is working correctly for virtual machines. .Procedure @@ -18,4 +19,6 @@ Run a storage checkup to validate that storage is working correctly for virtual . Enter a timeout value for the checkup in the *Timeout (minutes)* fields. . Click *Run*. -You can view the status of the storage checkup in the *Checkups* list on the *Storage* tab. Click on the name of the checkup for more details.
\ No newline at end of file +.Result + +You can view the status of the storage checkup in the *Checkups* list on the *Storage* tab. Click on the name of the checkup for more details. diff --git a/modules/virt-storage-pv-csi-overview.adoc b/modules/virt-storage-pv-csi-overview.adoc index acc22940552d..904dd4a026f2 100644 --- a/modules/virt-storage-pv-csi-overview.adoc +++ b/modules/virt-storage-pv-csi-overview.adoc @@ -6,6 +6,9 @@ [id="virt-storage-vp-csi-overview_{context}"] = Virtual machine CSI storage overview -{VirtProductName} integrates with the Container Storage Interface (CSI) to manage VM storage. Storage classes define storage capabilities such as performance tiers and types. PersistentVolumeClaims (PVCs) request storage resources, which bind to PersistentVolumes (PVs). CSI drivers connect Kubernetes to vendor storage backends, including iSCSI, NFS, and Fibre Channel. +[role="_abstract"] +{VirtProductName} integrates with the Container Storage Interface (CSI) to manage VM storage. -image:virt-storage-csi-paradigm.png[title="Virtual machine disks and the CSI paradigm"] \ No newline at end of file +Storage classes define storage capabilities such as performance tiers and types. PersistentVolumeClaims (PVCs) request storage resources, which bind to PersistentVolumes (PVs). CSI drivers connect Kubernetes to vendor storage backends, including iSCSI, NFS, and Fibre Channel. + +image:virt-storage-csi-paradigm.png[title="Virtual machine disks and the CSI paradigm"] diff --git a/modules/virt-storage-rbac-roles.adoc b/modules/virt-storage-rbac-roles.adoc index 9b27512c7793..c4b1952a7afa 100644 --- a/modules/virt-storage-rbac-roles.adoc +++ b/modules/virt-storage-rbac-roles.adoc @@ -6,6 +6,7 @@ [id="virt-storage-rbac-roles_{context}"] = RBAC roles for storage features in {VirtProductName} +[role="_abstract"] The following permissions are granted to the Containerized Data Importer (CDI), including the `cdi-operator` and `cdi-controller` service accounts. 
[id="cluster-wide-rbac-roles-cdi"] diff --git a/modules/virt-subscribing-cli.adoc b/modules/virt-subscribing-cli.adoc index c9628c24f60d..91b0b2502504 100644 --- a/modules/virt-subscribing-cli.adoc +++ b/modules/virt-subscribing-cli.adoc @@ -6,6 +6,7 @@ [id="virt-subscribing-cli_{context}"] = Subscribing to the {VirtProductName} catalog by using the CLI +[role="_abstract"] Before you install {VirtProductName}, you must subscribe to the {VirtProductName} catalog. Subscribing gives the `{CNVNamespace}` namespace access to the {VirtProductName} Operators. To subscribe, configure `Namespace`, `OperatorGroup`, and `Subscription` objects by applying a single manifest to your cluster. diff --git a/modules/virt-supported-cluster-version.adoc b/modules/virt-supported-cluster-version.adoc index 847570a89375..0813b1906591 100644 --- a/modules/virt-supported-cluster-version.adoc +++ b/modules/virt-supported-cluster-version.adoc @@ -7,13 +7,14 @@ [id="virt-supported-cluster-version_{context}"] = Supported cluster versions for {VirtProductName} -The latest stable release of {VirtProductName} {VirtVersion} is {HCOVersion}. - +[role="_abstract"] {VirtProductName} {VirtVersion} is supported for use on {product-title} {product-version} clusters. To use the latest z-stream release of {VirtProductName}, you must first upgrade to the latest version of {product-title}. +The latest stable release of {VirtProductName} {VirtVersion} is {HCOVersion}. + ifdef::openshift-rosa,openshift-rosa-hcp[] [NOTE] ==== {VirtProductName} is currently available on x86-64 CPUs. Arm-based nodes are not yet supported. 
==== -endif::openshift-rosa,openshift-rosa-hcp[] \ No newline at end of file +endif::openshift-rosa,openshift-rosa-hcp[] diff --git a/modules/virt-supported-ssp-tasks.adoc b/modules/virt-supported-ssp-tasks.adoc index 11a455e5582f..e0659f179e87 100644 --- a/modules/virt-supported-ssp-tasks.adoc +++ b/modules/virt-supported-ssp-tasks.adoc @@ -6,6 +6,7 @@ [id="virt-supported-ssp-tasks_{context}"] = Supported virtual machine tasks +[role="_abstract"] The following table shows the supported tasks. .Supported virtual machine tasks @@ -44,4 +45,4 @@ The following table shows the supported tasks. [NOTE] ==== Virtual machine creation in pipelines now utilizes `ClusterInstanceType` and `ClusterPreference` instead of template-based tasks, which have been deprecated. The `create-vm-from-template`, `copy-template`, and `modify-vm-template` commands remain available but are not used in default pipeline tasks. -==== \ No newline at end of file +==== diff --git a/modules/virt-template-fields-for-boot-source.adoc b/modules/virt-template-fields-for-boot-source.adoc index c746f12a4db5..6b66feed0320 100644 --- a/modules/virt-template-fields-for-boot-source.adoc +++ b/modules/virt-template-fields-for-boot-source.adoc @@ -5,6 +5,7 @@ [id="virt-template-fields-for-boot-source_{context}"] = Virtual machine template fields for adding a boot source +[role="_abstract"] The following table describes the fields for the *Add boot source to template* window. This window displays when you click *Add source* for a virtual machine template on the *Virtualization* -> *Templates* page.
[cols="2a,3a,5a"] diff --git a/modules/virt-temporary-token-VNC.adoc b/modules/virt-temporary-token-VNC.adoc index f4676fe417bf..67aabecfe9a0 100644 --- a/modules/virt-temporary-token-VNC.adoc +++ b/modules/virt-temporary-token-VNC.adoc @@ -6,6 +6,7 @@ [id="virt-temporary-token-VNC_{context}"] = Generating a temporary token for the VNC console +[role="_abstract"] To access the VNC of a virtual machine (VM), generate a temporary authentication bearer token for the Kubernetes API. [NOTE] @@ -50,7 +51,7 @@ Sample output: ---- $ export VNC_TOKEN="" ---- - ++ You can now use the token to access the VNC console of a VM. .Verification diff --git a/modules/virt-tested-maximums.adoc b/modules/virt-tested-maximums.adoc index f100b9b159fa..f971becc97d3 100644 --- a/modules/virt-tested-maximums.adoc +++ b/modules/virt-tested-maximums.adoc @@ -6,6 +6,7 @@ [id="virt-tested-maximums_{context}"] = Tested maximums for {VirtProductName} +[role="_abstract"] The following limits apply to a large-scale {VirtProductName} 4.x environment. They are based on a single cluster of the largest possible size. When you plan an environment, remember that multiple smaller clusters might be the best option for your use case. 
[id="vm-maximums_{context}"] diff --git a/modules/virt-troubleshoot-storage-checkup.adoc b/modules/virt-troubleshoot-storage-checkup.adoc index 443d9aff4279..d667583fefe3 100644 --- a/modules/virt-troubleshoot-storage-checkup.adoc +++ b/modules/virt-troubleshoot-storage-checkup.adoc @@ -23,7 +23,8 @@ If a storage checkup fails, there are steps that you can take to identify the re $ oc get configmap storage-checkup-config -n -o yaml ---- + -.Example output config map +Example output config map: ++ [source,yaml] ---- apiVersion: v1 diff --git a/modules/virt-troubleshooting-cert-rotation-parameters.adoc b/modules/virt-troubleshooting-cert-rotation-parameters.adoc index 2539b18f8ca3..556984437fb6 100644 --- a/modules/virt-troubleshooting-cert-rotation-parameters.adoc +++ b/modules/virt-troubleshooting-cert-rotation-parameters.adoc @@ -6,7 +6,10 @@ [id="virt-troubleshooting-cert-rotation-parameters_{context}"] = Troubleshooting certificate rotation parameters -Deleting one or more `certConfig` values causes them to revert to the default values, unless the default values conflict with one of the following conditions: +[role="_abstract"] +Deleting one or more `certConfig` values causes them to revert to the default values, unless the default values conflict with certain conditions. If the default values conflict with these conditions, you receive an error. + +The conditions are: @@ -14,12 +17,10 @@ Deleting one or more `certConfig` values causes them to revert to the default va * The value of `ca.renewBefore` must be less than or equal to the value of `ca.duration`. * The value of `server.renewBefore` must be less than or equal to the value of `server.duration`. - -If the default values conflict with these conditions, you will receive an error. - If you remove the `server.duration` value in the following example, the default value of `24h0m0s` is greater than the value of `ca.duration`, conflicting with the specified conditions.
-.Example +Example: + [source,yaml] ---- certConfig: diff --git a/modules/virt-troubleshooting-incorrect-policy-config.adoc b/modules/virt-troubleshooting-incorrect-policy-config.adoc index ffa5dcc54b0d..f076a6b84451 100644 --- a/modules/virt-troubleshooting-incorrect-policy-config.adoc +++ b/modules/virt-troubleshooting-incorrect-policy-config.adoc @@ -6,9 +6,10 @@ [id="virt-troubleshooting-incorrect-policy-config_{context}"] = Troubleshooting an incorrect node network configuration policy configuration -You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy. +[role="_abstract"] +You can apply changes to the node network configuration across your entire cluster by applying a node network configuration policy. If you applied an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy. -If you applied an incorrect configuration, you can use the following example to troubleshoot and correct the failed node network policy. The example attempts to apply a Linux bridge policy to a cluster that has three control plane nodes and three compute nodes. The policy is not applied because the policy references the wrong interface. +The example attempts to apply a Linux bridge policy to a cluster that has three control plane nodes and three compute nodes. The policy is not applied because the policy references the wrong interface. To find an error, you need to investigate the available NMState resources. You can then update the policy with the correct configuration. 
@@ -53,7 +54,8 @@ spec: $ oc apply -f ens01-bridge-testfail.yaml ---- + -.Example output +Example output: ++ [source,terminal] ---- nodenetworkconfigurationpolicy.nmstate.io/ens01-bridge-testfail created @@ -68,7 +70,6 @@ $ oc get nncp + The output shows that the policy failed: + -.Example output [source,terminal] ---- NAME STATUS @@ -86,7 +87,6 @@ $ oc get nnce + The output shows that the policy failed on all nodes: + -.Example output [source,terminal] ---- NAME STATUS @@ -105,7 +105,8 @@ compute-3.ens01-bridge-testfail FailedToConfigure $ oc get nnce compute-1.ens01-bridge-testfail -o jsonpath='{.status.conditions[?(@.type=="Failing")].message}' ---- + -.Example output +Example output: ++ [source,terminal] ---- [2024-10-10T08:40:46Z INFO nmstatectl] Nmstate version: 2.2.37 @@ -122,7 +123,6 @@ $ oc get nns control-plane-1 -o yaml + The output shows that the interface name on the nodes is `ens1` but the failed policy incorrectly uses `ens01`: + -.Example output [source,yaml] ---- - ipv4: @@ -155,7 +155,8 @@ Save the policy to apply the correction. $ oc get nncp ---- + -.Example output +Example output: ++ [source,terminal] ---- NAME STATUS diff --git a/modules/virt-unpausing-vm-web.adoc b/modules/virt-unpausing-vm-web.adoc index 6dac143b7114..2b878fe732e6 100644 --- a/modules/virt-unpausing-vm-web.adoc +++ b/modules/virt-unpausing-vm-web.adoc @@ -6,6 +6,7 @@ [id="virt-unpausing-vm-web_{context}"] = Unpausing a virtual machine +[role="_abstract"] You can unpause a paused virtual machine (VM) from the web console. 
.Prerequisites diff --git a/modules/virt-updating-multiple-vms.adoc b/modules/virt-updating-multiple-vms.adoc index 1b9451e190d2..a5146e5b76ed 100644 --- a/modules/virt-updating-multiple-vms.adoc +++ b/modules/virt-updating-multiple-vms.adoc @@ -6,6 +6,7 @@ [id="virt-updating-multiple-vms_{context}"] = Updating multiple virtual machines +[role="_abstract"] You can use the command line interface (CLI) to update multiple virtual machines (VMs) at the same time. .Prerequisites diff --git a/modules/virt-updating-virtio-drivers-windows.adoc b/modules/virt-updating-virtio-drivers-windows.adoc index 0e9a9a6d364f..e7ce69c8b5a7 100644 --- a/modules/virt-updating-virtio-drivers-windows.adoc +++ b/modules/virt-updating-virtio-drivers-windows.adoc @@ -7,7 +7,8 @@ [id="virt-updating-virtio-drivers-windows_{context}"] = Updating VirtIO drivers on a Windows VM -Update the `virtio` drivers on a Windows virtual machine (VM) by using the Windows Update service. +[role="_abstract"] +You can update the `virtio` drivers on a Windows virtual machine (VM) by using the Windows Update service. .Prerequisites @@ -25,4 +26,4 @@ Update the `virtio` drivers on a Windows virtual machine (VM) by using the Windo . On the Windows VM, navigate to the *Device Manager*. . Select a device. . Select the *Driver* tab. -. Click *Driver Details* and confirm that the `virtio` driver details displays the correct version. \ No newline at end of file +. Click *Driver Details* and confirm that the `virtio` driver details display the correct version. diff --git a/modules/virt-uploading-image-virtctl.adoc b/modules/virt-uploading-image-virtctl.adoc index 5bff30a0f540..38b7523e2e77 100644 --- a/modules/virt-uploading-image-virtctl.adoc +++ b/modules/virt-uploading-image-virtctl.adoc @@ -6,6 +6,7 @@ [id="virt-uploading-image-virtctl_{context}"] = Creating a VM from an uploaded image by using the CLI +[role="_abstract"] You can upload an operating system image by using the `virtctl` command-line tool.
You can use an existing data volume or create a new data volume for the image. .Prerequisites diff --git a/modules/virt-uploading-image-web.adoc b/modules/virt-uploading-image-web.adoc index ee498c8b099a..39bfbdc21bb4 100644 --- a/modules/virt-uploading-image-web.adoc +++ b/modules/virt-uploading-image-web.adoc @@ -6,7 +6,8 @@ [id="virt-uploading-image-web_{context}"] = Uploading an image file using the web console -Use the web console to upload an image file to a new persistent volume claim (PVC). +[role="_abstract"] +You can use the web console to upload an image file to a new persistent volume claim (PVC). You can later use this PVC to attach the image to new virtual machines. .Prerequisites diff --git a/modules/virt-uploading-local-disk-image-dv.adoc b/modules/virt-uploading-local-disk-image-dv.adoc index bef634d302d0..95889685dda7 100644 --- a/modules/virt-uploading-local-disk-image-dv.adoc +++ b/modules/virt-uploading-local-disk-image-dv.adoc @@ -6,6 +6,7 @@ [id="virt-uploading-local-disk-image-dv_{context}"] = Uploading a local disk image to a data volume +[role="_abstract"] You can use the `virtctl` CLI utility to upload a local disk image from a client machine to a data volume (DV) in your cluster. You can use a DV that already exists in your cluster or create a new DV during this procedure. diff --git a/modules/virt-using-NUMA.adoc b/modules/virt-using-NUMA.adoc index 809c63829ff8..39a313c176ac 100644 --- a/modules/virt-using-NUMA.adoc +++ b/modules/virt-using-NUMA.adoc @@ -6,6 +6,7 @@ [id="virt-using-NUMA_{context}"] = Using NUMA topology with {VirtProductName} +[role="_abstract"] You must enable the NUMA functionality for {VirtProductName} VMs to prevent performance degradation on nodes with multiple NUMA zones. This feature is vital for high-performance and latency-sensitive workloads. Without NUMA awareness, a VM's virtual CPUs might run on one physical NUMA zone, while its memory is allocated on another. 
This "cross-node" communication significantly increases latency and reduces memory bandwidth, and can cause the interconnect buses which link the NUMA zones to become a bottleneck. diff --git a/modules/virt-using-flags-specify.adoc b/modules/virt-using-flags-specify.adoc index ca58e352b323..906fcea0614f 100644 --- a/modules/virt-using-flags-specify.adoc +++ b/modules/virt-using-flags-specify.adoc @@ -6,7 +6,8 @@ [id="virt-using-flags-specify_{context}"] = Using flags to specify instance types and preferences -Specify instance types and preferences by using flags. +[role="_abstract"] +You can specify instance types and preferences by using flags. .Prerequisites @@ -28,4 +29,4 @@ $ virtctl create vm --instancetype --preference --preference virtualmachinepreference/ ----- \ No newline at end of file +---- diff --git a/modules/virt-using-skip-node.adoc b/modules/virt-using-skip-node.adoc index dbdc6325d7ff..9fa0b9da9a50 100644 --- a/modules/virt-using-skip-node.adoc +++ b/modules/virt-using-skip-node.adoc @@ -7,6 +7,7 @@ [id="virt-using-skip-node_{context}"] = Using skip-node annotation +[role="_abstract"] If you want the `node-labeller` to skip a node, annotate that node by using the `oc` CLI. .Prerequisites diff --git a/modules/virt-using-virt-must-gather.adoc b/modules/virt-using-virt-must-gather.adoc index c7f9d1f184a8..002c1f6c7c17 100644 --- a/modules/virt-using-virt-must-gather.adoc +++ b/modules/virt-using-virt-must-gather.adoc @@ -8,6 +8,7 @@ [id="virt-using-virt-must-gather_{context}"] = Using the must-gather tool for {VirtProductName} +[role="_abstract"] You can collect data about {VirtProductName} resources by running the `must-gather` command with the {VirtProductName} image. 
The default data collection includes information about the following resources: diff --git a/modules/virt-using-virtctl-port-forward-command.adoc b/modules/virt-using-virtctl-port-forward-command.adoc index 47e12288cbe3..89d166315f93 100644 --- a/modules/virt-using-virtctl-port-forward-command.adoc +++ b/modules/virt-using-virtctl-port-forward-command.adoc @@ -6,6 +6,7 @@ [id="virt-using-virtctl-port-forward-command_{context}"] = Using the virtctl port-forward command +[role="_abstract"] You can use your local OpenSSH client and the `virtctl port-forward` command to connect to a running virtual machine (VM). You can use this method with Ansible to automate the configuration of VMs. This method is recommended for low-traffic applications because port-forwarding traffic is sent over the control plane. This method is not recommended for high-traffic applications such as Rsync or Remote Desktop Protocol because it places a heavy burden on the API server. diff --git a/modules/virt-using-virtctl-ssh-command.adoc b/modules/virt-using-virtctl-ssh-command.adoc index efbafb8bdfd6..6a56acf77300 100644 --- a/modules/virt-using-virtctl-ssh-command.adoc +++ b/modules/virt-using-virtctl-ssh-command.adoc @@ -6,6 +6,7 @@ [id="virt-using-virtctl-ssh-command_{context}"] = Using the virtctl ssh command +[role="_abstract"] You can access a running virtual machine (VM) by using the `virtctl ssh` command. .Prerequisites @@ -25,8 +26,9 @@ $ virtctl -n ssh @example-vm -i <1> ---- <1> Specify the namespace, user name, and the SSH private key. The default SSH key location is `/home/user/.ssh`. If you save the key in a different location, you must specify the path.
+ -.Example +Example: ++ [source,terminal] ---- $ virtctl -n my-namespace ssh cloud-user@example-vm -i my-key ----- \ No newline at end of file +---- diff --git a/modules/virt-using-wasp-agent-to-configure-higher-vm-workload-density.adoc b/modules/virt-using-wasp-agent-to-configure-higher-vm-workload-density.adoc index 6cda69a351bb..e846ffeac640 100644 --- a/modules/virt-using-wasp-agent-to-configure-higher-vm-workload-density.adoc +++ b/modules/virt-using-wasp-agent-to-configure-higher-vm-workload-density.adoc @@ -6,6 +6,7 @@ [id="virt-using-wasp-agent-to-configure-higher-vm-workload-density_{context}"] = Using wasp-agent to increase VM workload density +[role="_abstract"] The `wasp-agent` component facilitates memory overcommitment by assigning swap resources to worker nodes. It also manages pod evictions when nodes are at risk due to high swap I/O traffic or high utilization. [IMPORTANT] @@ -34,7 +35,6 @@ The `wasp-agent` component deploys an Open Container Initiative (OCI) hook to en . Configure the `kubelet` service to permit swap usage: .. 
Create or edit a `KubeletConfig` file with the parameters shown in the following example: + -.Example of a `KubeletConfig` file [source,yaml] ---- apiVersion: machineconfiguration.openshift.io/v1 @@ -115,7 +115,8 @@ To have enough swap space for the worst-case scenario, make sure to have at leas NODE_SWAP_SPACE = NODE_RAM * (MEMORY_OVER_COMMIT_PERCENT / 100% - 1) ---- + -.Example +Example: ++ [source,terminal] ---- NODE_SWAP_SPACE = 16 GB * (150% / 100% - 1) @@ -299,7 +300,8 @@ $ oc -n openshift-cnv patch HyperConverged/kubevirt-hyperconverged --type='json' ]' ---- + -.Successful output +Successful output: ++ [source,terminal] ---- hyperconverged.hco.kubevirt.io/kubevirt-hyperconverged patched @@ -317,7 +319,8 @@ $ oc rollout status ds wasp-agent -n wasp + If the deployment is successful, the following message is displayed: + -.Example output +Example output: ++ [source, terminal] ---- daemon set "wasp-agent" successfully rolled out @@ -355,7 +358,8 @@ If swap is provisioned, an amount greater than zero is displayed in the `Swap:` $ oc -n openshift-cnv get HyperConverged/kubevirt-hyperconverged -o jsonpath='{.spec.higherWorkloadDensity}{"\n"}' ---- + -.Example output +Example output: ++ [source,terminal] ---- {"memoryOvercommitPercentage":150} diff --git a/modules/virt-verify-status-bootsource-update.adoc b/modules/virt-verify-status-bootsource-update.adoc index f86094776d3c..f22a754b9030 100644 --- a/modules/virt-verify-status-bootsource-update.adoc +++ b/modules/virt-verify-status-bootsource-update.adoc @@ -7,6 +7,7 @@ [id="virt-verify-status-bootsource-update_{context}"] = Verifying the status of a boot source +[role="_abstract"] You can determine if a boot source is system-defined or custom by viewing the `HyperConverged` custom resource (CR). 
.Prerequisites @@ -22,8 +23,8 @@ You can determine if a boot source is system-defined or custom by viewing the `H $ oc get hyperconverged kubevirt-hyperconverged -n {CNVNamespace} -o yaml ---- + -.Example output - +Example output: ++ [source,yaml] ---- apiVersion: hco.kubevirt.io/v1beta1 diff --git a/modules/virt-verifying-online-snapshot-creation-with-snapshot-indications.adoc b/modules/virt-verifying-online-snapshot-creation-with-snapshot-indications.adoc index e5a7b38d7be0..3ebfb83b7636 100644 --- a/modules/virt-verifying-online-snapshot-creation-with-snapshot-indications.adoc +++ b/modules/virt-verifying-online-snapshot-creation-with-snapshot-indications.adoc @@ -6,6 +6,7 @@ [id="virt-verifying-online-snapshot-creation-with-snapshot-indications_{context}"] = Verifying online snapshots by using snapshot indications +[role="_abstract"] Snapshot indications are contextual information about online virtual machine (VM) snapshot operations. Indications are not available for offline virtual machine (VM) snapshot operations. Indications are helpful in describing details about the online snapshot creation. .Prerequisites diff --git a/modules/virt-view-guest-system-logs-cli.adoc b/modules/virt-view-guest-system-logs-cli.adoc index 22e49500484f..ccb2ffb5caa9 100644 --- a/modules/virt-view-guest-system-logs-cli.adoc +++ b/modules/virt-view-guest-system-logs-cli.adoc @@ -6,6 +6,7 @@ [id="virt-view-guest-system-logs-cli_{context}"] = Viewing guest system logs with the CLI +[role="_abstract"] You can view the serial console logs of a VM guest by running the `oc logs` command. 
.Prerequisites diff --git a/modules/virt-view-guest-system-logs-web.adoc b/modules/virt-view-guest-system-logs-web.adoc index 4c741e20461f..12bc1b5a630b 100644 --- a/modules/virt-view-guest-system-logs-web.adoc +++ b/modules/virt-view-guest-system-logs-web.adoc @@ -6,6 +6,7 @@ [id="virt-view-guest-system-logs-web_{context}"] = Viewing guest system logs with the web console +[role="_abstract"] You can view the serial console logs of a virtual machine (VM) guest by using the web console. .Prerequisites @@ -20,4 +21,4 @@ You can view the serial console logs of a virtual machine (VM) guest by using th . Click the *Diagnostics* tab. -. Click *Guest system logs* to load the serial console. \ No newline at end of file +. Click *Guest system logs* to load the serial console. diff --git a/modules/virt-viewing-automatically-created-storage-profiles.adoc b/modules/virt-viewing-automatically-created-storage-profiles.adoc index d1a1dee0de10..09ad00c50996 100644 --- a/modules/virt-viewing-automatically-created-storage-profiles.adoc +++ b/modules/virt-viewing-automatically-created-storage-profiles.adoc @@ -6,7 +6,8 @@ [id="virt-viewing-automatically-created-storage-profiles_{context}"] = Viewing automatically created storage profiles -The system creates storage profiles for each storage class automatically. +[role="_abstract"] +The system creates storage profiles for each storage class automatically. You can view these storage class profiles by using the `oc` command. 
.Prerequisites @@ -27,7 +28,8 @@ $ oc get storageprofile $ oc describe storageprofile ---- + -.Example storage profile details +Example storage profile details: ++ [source,yaml] ---- Name: ocs-storagecluster-ceph-rbd-virtualization diff --git a/modules/virt-viewing-downward-metrics-cli.adoc b/modules/virt-viewing-downward-metrics-cli.adoc index c5f4e04014a5..3a338f3c0641 100644 --- a/modules/virt-viewing-downward-metrics-cli.adoc +++ b/modules/virt-viewing-downward-metrics-cli.adoc @@ -6,6 +6,7 @@ [id="virt-viewing-downward-metrics-cli_{context}"] = Viewing downward metrics by using the CLI +[role="_abstract"] You can view downward metrics by entering a command from inside a guest virtual machine (VM). .Procedure @@ -20,4 +21,4 @@ $ sudo sh -c 'printf "GET /metrics/XML\n\n" > /dev/virtio-ports/org.github.vhost [source,terminal] ---- $ sudo cat /dev/virtio-ports/org.github.vhostmd.1 ----- \ No newline at end of file +---- diff --git a/modules/virt-viewing-downward-metrics-tool.adoc b/modules/virt-viewing-downward-metrics-tool.adoc index f8890092571f..904aa2bb3857 100644 --- a/modules/virt-viewing-downward-metrics-tool.adoc +++ b/modules/virt-viewing-downward-metrics-tool.adoc @@ -6,6 +6,7 @@ [id="virt-viewing-downward-metrics-tool_{context}"] = Viewing downward metrics by using the vm-dump-metrics tool +[role="_abstract"] To view downward metrics, install the `vm-dump-metrics` tool and then use the tool to expose the metrics results. 
[NOTE] @@ -29,7 +30,8 @@ $ sudo dnf install -y vm-dump-metrics $ sudo vm-dump-metrics ---- + -.Example output +Example output: ++ [source,xml] ---- @@ -46,4 +48,4 @@ $ sudo vm-dump-metrics kubevirt.io ----- \ No newline at end of file +---- diff --git a/modules/virt-viewing-downward-metrics.adoc b/modules/virt-viewing-downward-metrics.adoc index 24f2b2321241..7f12d9e0d56f 100644 --- a/modules/virt-viewing-downward-metrics.adoc +++ b/modules/virt-viewing-downward-metrics.adoc @@ -2,16 +2,14 @@ // // * virt/monitoring/virt-using-downward-metrics.adoc -:_mod-docs-content-type: PROCEDURE +:_mod-docs-content-type: CONCEPT [id="virt-viewing-downward-metrics_{context}"] = Viewing downward metrics -You can view downward metrics by using either of the following options: - -* The command-line interface (CLI) -* The `vm-dump-metrics` tool +[role="_abstract"] +You can view downward metrics by using either the command-line interface (CLI), or the `vm-dump-metrics` tool. [NOTE] ==== On Red Hat Enterprise Linux (RHEL) 9, use the command line to view downward metrics. The vm-dump-metrics tool is not supported on the Red Hat Enterprise Linux (RHEL) 9 platform. -==== \ No newline at end of file +==== diff --git a/modules/virt-viewing-graphical-representation-of-network-state-of-node-console.adoc b/modules/virt-viewing-graphical-representation-of-network-state-of-node-console.adoc index f2a95a8b5859..158e7c38d5a6 100644 --- a/modules/virt-viewing-graphical-representation-of-network-state-of-node-console.adoc +++ b/modules/virt-viewing-graphical-representation-of-network-state-of-node-console.adoc @@ -6,9 +6,12 @@ [id="virt-viewing-graphical-representation-of-network-state-of-node-console_{context}"] = Viewing a graphical representation of the network state of a node (NNS) topology from the web console -To make the configuration of the node network in the cluster easier to understand, you can view it in the form of a diagram. 
The NNS topology diagram displays all node components (network interface controllers, bridges, bonds, and VLANs), their properties and configurations, and connections between the nodes. +[role="_abstract"] +To make the configuration of the node network in the cluster easier to understand, you can view it in the form of a diagram. -To open the topology view of the cluster, use the following steps: +The NNS topology diagram displays all node components (network interface controllers, bridges, bonds, and VLANs), their properties and configurations, and connections between the nodes. + +.Procedure * In the *Administrator* view of the {product-title} web console, navigate to *Networking* -> *Node Network Configuration*. + diff --git a/modules/virt-viewing-list-of-nodenetworkstate-resources-console.adoc b/modules/virt-viewing-list-of-nodenetworkstate-resources-console.adoc index c9ae617ebe30..1c7b5ec1b71c 100644 --- a/modules/virt-viewing-list-of-nodenetworkstate-resources-console.adoc +++ b/modules/virt-viewing-list-of-nodenetworkstate-resources-console.adoc @@ -6,6 +6,7 @@ [id="virt-viewing-list-of-nodenetworkstate-resources-console_{context}"] = Viewing the list of NodeNetworkState resources +[role="_abstract"] As an administrator, you can use the {product-title} web console to view the list of `NodeNetworkState` resources and network interfaces, and access network details. .Procedure diff --git a/modules/virt-viewing-logs-cli.adoc b/modules/virt-viewing-logs-cli.adoc index 8a4e40547097..6bae65d6fe9b 100644 --- a/modules/virt-viewing-logs-cli.adoc +++ b/modules/virt-viewing-logs-cli.adoc @@ -6,6 +6,7 @@ [id="virt-viewing-logs-cli_{context}"] = Viewing {VirtProductName} pod logs with the CLI +[role="_abstract"] You can view logs for the {VirtProductName} pods by using the `oc` CLI tool. .Prerequisites @@ -21,9 +22,9 @@ You can view logs for the {VirtProductName} pods by using the `oc` CLI tool. 
$ oc get pods -n {CNVNamespace} ---- + -.Example output +Example output: ++ [%collapsible] -==== [source,terminal] ---- NAME READY STATUS RESTARTS AGE @@ -38,7 +39,6 @@ virt-handler-9qs6z 1/1 Running 0 30m virt-operator-7ccfdbf65f-q5snk 1/1 Running 0 32m virt-operator-7ccfdbf65f-vllz8 1/1 Running 0 32m ---- -==== . View the pod log by running the following command: + @@ -54,9 +54,9 @@ If a pod fails to start, you can use the `--previous` option to view logs from t To monitor log output in real time, use the `-f` option. ==== + -.Example output +Example output: ++ [%collapsible] -==== [source,terminal] ---- {"component":"virt-handler","level":"info","msg":"set verbosity to 2","pos":"virt-handler.go:453","timestamp":"2022-04-17T08:58:37.373695Z"} @@ -66,4 +66,3 @@ To monitor log output in real time, use the `-f` option. {"component":"virt-handler","level":"warning","msg":"host model mode is expected to contain only one model","pos":"cpu_plugin.go:103","timestamp":"2022-04-17T08:58:37.390263Z"} {"component":"virt-handler","level":"info","msg":"node-labeller is running","pos":"node_labeller.go:94","timestamp":"2022-04-17T08:58:37.391011Z"} ---- -==== diff --git a/modules/virt-viewing-logs-loki.adoc b/modules/virt-viewing-logs-loki.adoc index b6a249175062..0347c3a4dd2c 100644 --- a/modules/virt-viewing-logs-loki.adoc +++ b/modules/virt-viewing-logs-loki.adoc @@ -6,6 +6,7 @@ [id="virt-viewing-logs-loki_{context}"] = Viewing aggregated {VirtProductName} logs with the LokiStack +[role="_abstract"] You can view aggregated logs for {VirtProductName} pods and containers by using the LokiStack in the web console. 
.Prerequisites diff --git a/modules/virt-viewing-network-state-of-node.adoc b/modules/virt-viewing-network-state-of-node.adoc index e6730f7ed187..63709ab74936 100644 --- a/modules/virt-viewing-network-state-of-node.adoc +++ b/modules/virt-viewing-network-state-of-node.adoc @@ -6,6 +6,7 @@ [id="virt-viewing-network-state-of-node_{context}"] = Viewing the network state of a node by using the CLI +[role="_abstract"] Node network state is the network configuration for all nodes in the cluster. A `NodeNetworkState` object exists on every node in the cluster. This object is periodically updated and captures the state of the network for that node. .Prerequisites @@ -28,7 +29,8 @@ $ oc get nns $ oc get nns node01 -o yaml ---- + -.Example output +Example output: ++ [source,yaml] ---- apiVersion: nmstate.io/v1 diff --git a/modules/virt-viewing-outdated-workloads.adoc b/modules/virt-viewing-outdated-workloads.adoc index 427e285ed835..35af67eca34d 100644 --- a/modules/virt-viewing-outdated-workloads.adoc +++ b/modules/virt-viewing-outdated-workloads.adoc @@ -6,6 +6,7 @@ [id="virt-viewing-outdated-workloads_{context}"] = Viewing outdated VM workloads +[role="_abstract"] You can view a list of outdated virtual machine (VM) workloads by using the CLI. [NOTE] diff --git a/modules/virt-viewing-virt-launcher-pod-logs-web.adoc b/modules/virt-viewing-virt-launcher-pod-logs-web.adoc index 643ffaf1a31a..9ba14aa713f5 100644 --- a/modules/virt-viewing-virt-launcher-pod-logs-web.adoc +++ b/modules/virt-viewing-virt-launcher-pod-logs-web.adoc @@ -6,6 +6,7 @@ [id="virt-viewing-virt-launcher-pod-logs-web_{context}"] = Viewing virt-launcher pod logs with the web console +[role="_abstract"] You can view the `virt-launcher` pod logs for a virtual machine by using the {product-title} web console. 
.Procedure diff --git a/modules/virt-viewing-vmi-ip-cli.adoc b/modules/virt-viewing-vmi-ip-cli.adoc index 05045cb930de..cba583f0b8e9 100644 --- a/modules/virt-viewing-vmi-ip-cli.adoc +++ b/modules/virt-viewing-vmi-ip-cli.adoc @@ -6,6 +6,7 @@ [id="virt-viewing-vmi-ip-cli_{context}"] = Viewing the IP address of a virtual machine by using the CLI +[role="_abstract"] You can view the IP address of a virtual machine (VM) by using the command line. [NOTE] @@ -26,7 +27,8 @@ You must install the QEMU guest agent on a VM to view the IP address of a second $ oc describe vmi ---- + -.Example output +Example output: ++ [source,yaml] ---- # ... diff --git a/modules/virt-viewing-vmi-ip-web.adoc b/modules/virt-viewing-vmi-ip-web.adoc index 02a90b9a8e38..7a16172d037a 100644 --- a/modules/virt-viewing-vmi-ip-web.adoc +++ b/modules/virt-viewing-vmi-ip-web.adoc @@ -6,6 +6,7 @@ [id="virt-viewing-vmi-ip-web_{context}"] = Viewing the IP address of a virtual machine by using the web console +[role="_abstract"] You can view the IP address of a virtual machine (VM) by using the {product-title} web console. [NOTE] diff --git a/modules/virt-virtctl-information-commands.adoc b/modules/virt-virtctl-information-commands.adoc index ab8f81847ea5..2d114a746056 100644 --- a/modules/virt-virtctl-information-commands.adoc +++ b/modules/virt-virtctl-information-commands.adoc @@ -5,6 +5,7 @@ [id="virtctl-information-commands_{context}"] = virtctl information commands +[role="_abstract"] You can use the following `virtctl` information commands to view information about the `virtctl` client. 
.Information commands diff --git a/modules/virt-vm-behavior-dr.adoc b/modules/virt-vm-behavior-dr.adoc index 4356fc8779f9..e5aff63ee264 100644 --- a/modules/virt-vm-behavior-dr.adoc +++ b/modules/virt-vm-behavior-dr.adoc @@ -6,9 +6,9 @@ [id="virt-vm-behavior-dr_{context}"] = VM behavior during disaster recovery scenarios +[role="_abstract"] VMs typically act similarly to pod-based workloads during both relocate and failover disaster recovery flows. -[discrete] [id="dr-relocate_{context}"] == Relocate @@ -16,10 +16,9 @@ Use relocate to move an application from the primary environment to the secondar Because the VM terminates gracefully, there is no data loss. Therefore, the VM operating system will not perform crash recovery. -[discrete] [id="dr-failover_{context}"] == Failover Use failover when there is a critical failure in the primary environment that makes it impractical or impossible to use relocation to move the workload to a secondary environment. When failover is executed, the storage is fenced from the primary environment, the I/O to the VM disks is abruptly halted, and the VM restarts in the secondary environment using the replicated data. -You should expect data loss due to failover. The extent of loss depends on whether you use Metro-DR, which uses synchronous replication, or Regional-DR, which uses asynchronous replication. Because Regional-DR uses snapshot-based replication intervals, the window of data loss is proportional to the replication interval length. When the VM restarts, the operating system might perform crash recovery. \ No newline at end of file +You should expect data loss due to failover. The extent of loss depends on whether you use Metro-DR, which uses synchronous replication, or Regional-DR, which uses asynchronous replication. Because Regional-DR uses snapshot-based replication intervals, the window of data loss is proportional to the replication interval length. 
When the VM restarts, the operating system might perform crash recovery. diff --git a/modules/virt-vm-connection-commands.adoc b/modules/virt-vm-connection-commands.adoc index ce6c9170036e..3b5f2bf211bb 100644 --- a/modules/virt-vm-connection-commands.adoc +++ b/modules/virt-vm-connection-commands.adoc @@ -5,6 +5,7 @@ [id="vm-connection-commands_{context}"] = VM connection commands +[role="_abstract"] You can use the following `virtctl` commands to expose ports and connect to virtual machines (VMs) and VM instances (VMIs). .VM connection commands diff --git a/modules/virt-vm-creating-nic-web.adoc b/modules/virt-vm-creating-nic-web.adoc index 45d0d3dde299..e0b3353dc837 100644 --- a/modules/virt-vm-creating-nic-web.adoc +++ b/modules/virt-vm-creating-nic-web.adoc @@ -7,6 +7,7 @@ [id="virt-vm-creating-nic-web_{context}"] = Configuring a VM network interface by using the web console +[role="_abstract"] You can configure a network interface for a virtual machine (VM) by using the {product-title} web console. .Prerequisites diff --git a/modules/virt-vm-custom-scheduler.adoc b/modules/virt-vm-custom-scheduler.adoc index efafaee1392c..ca4e5cea8cef 100644 --- a/modules/virt-vm-custom-scheduler.adoc +++ b/modules/virt-vm-custom-scheduler.adoc @@ -6,6 +6,7 @@ [id="virt-vm-custom-scheduler_{context}"] = Scheduling virtual machines with a custom scheduler +[role="_abstract"] You can use a custom scheduler to schedule a virtual machine (VM) on a node. .Prerequisites @@ -49,7 +50,8 @@ spec: $ oc get pods ---- + -.Example output +Example output: ++ [source,terminal] ---- NAME READY STATUS RESTARTS AGE @@ -65,7 +67,8 @@ $ oc describe pod virt-launcher-vm-fedora-dpc87 + The value of the `From` field in the output verifies that the scheduler name matches the custom scheduler specified in the `VirtualMachine` manifest: + -.Example output +Example output: ++ [source,terminal] ---- [...] 
diff --git a/modules/virt-vm-export-commands.adoc b/modules/virt-vm-export-commands.adoc index 2ef0764c1ed6..bfa5538074c6 100644 --- a/modules/virt-vm-export-commands.adoc +++ b/modules/virt-vm-export-commands.adoc @@ -5,6 +5,7 @@ [id="vm-export-commands_{context}"] = VM export commands +[role="_abstract"] Use `virtctl vmexport` commands to create, download, or delete a volume exported from a VM, VM snapshot, or persistent volume claim (PVC). Certain manifests also contain a header secret, which grants access to the endpoint to import a disk image in a format that {VirtProductName} can use. .VM export commands diff --git a/modules/virt-vm-information-commands.adoc b/modules/virt-vm-information-commands.adoc index 2d1a49710b99..dd42e476a66c 100644 --- a/modules/virt-vm-information-commands.adoc +++ b/modules/virt-vm-information-commands.adoc @@ -5,6 +5,7 @@ [id="vm-information-commands_{context}"] = VM information commands +[role="_abstract"] You can use `virtctl` to view information about virtual machines (VMs) and virtual machine instances (VMIs). .VM information commands diff --git a/modules/virt-vm-management-commands.adoc b/modules/virt-vm-management-commands.adoc index f1b3c502e80a..67b779a6ce77 100644 --- a/modules/virt-vm-management-commands.adoc +++ b/modules/virt-vm-management-commands.adoc @@ -5,6 +5,7 @@ [id="vm-management-commands_{context}"] = VM management commands +[role="_abstract"] You can use the following `virtctl` commands to manage and migrate virtual machines (VMs) and VM instances (VMIs). 
.VM management commands diff --git a/modules/virt-vm-manifest-creation-commands.adoc b/modules/virt-vm-manifest-creation-commands.adoc index 2ee61a37533e..b3fcded6f33f 100644 --- a/modules/virt-vm-manifest-creation-commands.adoc +++ b/modules/virt-vm-manifest-creation-commands.adoc @@ -5,6 +5,7 @@ [id="vm-manifest-creation-commands_{context}"] = VM manifest creation commands +[role="_abstract"] You can use the following `virtctl create` commands to create manifests for virtual machines, instance types, and preferences. .VM manifest creation commands diff --git a/modules/virt-vm-migration-tuning.adoc b/modules/virt-vm-migration-tuning.adoc index ab5249e25939..4aa2167acf5a 100644 --- a/modules/virt-vm-migration-tuning.adoc +++ b/modules/virt-vm-migration-tuning.adoc @@ -7,7 +7,10 @@ [id="virt-vm-migration-tuning_{context}"] = VM migration tuning -You can adjust your cluster-wide live migration settings based on the type of workload and migration scenario. This enables you to control how many VMs migrate at the same time, the network bandwidth you want to use for each migration, and how long {VirtProductName} attempts to complete the migration before canceling the process. Configure these settings in the `HyperConverged` custom resource (CR). +[role="_abstract"] +You can adjust your cluster-wide live migration settings based on the type of workload and migration scenario. + +This enables you to control how many VMs migrate at the same time, the network bandwidth you want to use for each migration, and how long {VirtProductName} attempts to complete the migration before canceling the process. Configure these settings in the `HyperConverged` custom resource (CR). If you are migrating multiple VMs per node at the same time, set a `bandwidthPerMigration` limit to prevent a large or busy VM from using a large portion of the node’s network bandwidth. By default, the `bandwidthPerMigration` value is `0`, which means unlimited. 
diff --git a/modules/virt-vm-rdp-console-web.adoc b/modules/virt-vm-rdp-console-web.adoc index 678130028e83..8f05df1c3f51 100644 --- a/modules/virt-vm-rdp-console-web.adoc +++ b/modules/virt-vm-rdp-console-web.adoc @@ -6,6 +6,7 @@ [id="virt-vm-rdp-console-web_{context}"] = Connecting to a Windows virtual machine with RDP +[role="_abstract"] The *Desktop viewer* console, which utilizes the Remote Desktop Protocol (RDP), provides a better console experience for connecting to Windows virtual machines. To connect to a Windows virtual machine with RDP, download the `console.rdp` file for the virtual machine from the *Console* tab on the *VirtualMachine details* page of the web console and supply it to your preferred RDP client. diff --git a/modules/virt-vm-serial-console-web.adoc b/modules/virt-vm-serial-console-web.adoc index a11a652eac04..d0b10050559c 100644 --- a/modules/virt-vm-serial-console-web.adoc +++ b/modules/virt-vm-serial-console-web.adoc @@ -6,7 +6,8 @@ [id="virt-vm-serial-console-web_{context}"] = Connecting to the serial console -Connect to the serial console of a running virtual machine from the *Console* +[role="_abstract"] +You can connect to the serial console of a running virtual machine from the *Console* tab on the *VirtualMachine details* page of the web console. .Procedure diff --git a/modules/virt-vmware-comparison.adoc b/modules/virt-vmware-comparison.adoc index c06b5aed5fd9..04f0ed32dbb7 100644 --- a/modules/virt-vmware-comparison.adoc +++ b/modules/virt-vmware-comparison.adoc @@ -6,7 +6,10 @@ [id="virt-vmware-comparison_{context}"] = Comparing {VirtProductName} to {vmw-full} -If you are familiar with {vmw-first}, the following table lists {VirtProductName} components that you can use to accomplish similar tasks. 
However, because {VirtProductName} is conceptually different from {vmw-short}, and much of its functionality comes from the underlying {product-title}, {VirtProductName} does not have direct alternatives for all {vmw-short} concepts or components. +[role="_abstract"] +If you are familiar with {vmw-first}, the following table lists {VirtProductName} components that you can use to accomplish similar tasks. + +However, because {VirtProductName} is conceptually different from {vmw-short}, and much of its functionality comes from the underlying {product-title}, {VirtProductName} does not have direct alternatives for all {vmw-short} concepts or components. .Mapping of {vmw-short} concepts to their closest {VirtProductName} counterparts [options="header"] @@ -15,27 +18,36 @@ If you are familiar with {vmw-first}, the following table lists {VirtProductName |{vmw-short} concept |{VirtProductName} |Explanation |Datastore -|Persistent volume (PV){nbsp}+ + +a|Persistent volume (PV) + Persistent volume claim (PVC) + |Stores VM disks. A PV represents existing storage and is attached to a VM through a PVC. When created with the `ReadWriteMany` (RWX) access mode, PVCs can be mounted by multiple VMs simultaneously. |Dynamic Resource Scheduling (DRS) -|Pod eviction policy{nbsp}+ + +a|Pod eviction policy + Descheduler + |Provides active resource balancing. A combination of pod eviction policies and a descheduler allows VMs to be live migrated to more appropriate nodes to keep node resource utilization manageable. |NSX -|Multus{nbsp}+ + -OVN-Kubernetes{nbsp}+ + +a|Multus + +OVN-Kubernetes + Third-party container network interface (CNI) plug-ins + |Provides an overlay network configuration. There is no direct equivalent for NSX in {VirtProductName}, but you can use the OVN-Kubernetes network provider or install certified third-party CNI plug-ins. |Storage Policy Based Management (SPBM) |Storage class |Provides policy-based storage selection. 
Storage classes represent various storage types and describe storage capabilities, such as quality of service, backup policy, reclaim policy, and whether volume expansion is allowed. A PVC can request a specific storage class to satisfy application requirements. -|vCenter + +a|vCenter + vRealize Operations + |OpenShift Metrics and Monitoring |Provides host and VM metrics. You can view metrics and monitor the overall health of the cluster and VMs by using the {product-title} web console. @@ -43,9 +55,13 @@ vRealize Operations |Live migration |Moves a running VM to another node without interruption. For live migration to be available, the PVC attached to the VM must have the `ReadWriteMany` (RWX) access mode. -|vSwitch + +a|vSwitch + DvSwitch -|NMState Operator{nbsp}+ + + +a|NMState Operator + Multus + |Provides a physical network configuration. You can use the NMState Operator to apply state-driven network configuration and manage various network interface types, including Linux bridges and network bonds. With Multus, you can attach multiple network interfaces and connect VMs to external networks. |=== diff --git a/modules/virt-wasp-agent-pod-eviction.adoc b/modules/virt-wasp-agent-pod-eviction.adoc index 8fac4c70275c..f2ca670ea1f8 100644 --- a/modules/virt-wasp-agent-pod-eviction.adoc +++ b/modules/virt-wasp-agent-pod-eviction.adoc @@ -6,13 +6,15 @@ [id="virt-wasp-agent-pod-eviction_{context}"] = Pod eviction conditions used by wasp-agent +[role="_abstract"] The wasp agent manages pod eviction when the system is heavily loaded and nodes are at risk. Eviction is triggered if one of the following conditions is met: High swap I/O traffic:: This condition is met when swap-related I/O traffic is excessively high. 
+ -.Condition +Condition: ++ [source,text] ---- averageSwapInPerSecond > maxAverageSwapInPagesPerSecond @@ -26,7 +28,8 @@ High swap utilization:: This condition is met when swap utilization is excessively high, causing the current virtual memory usage to exceed the factored threshold. The `NODE_SWAP_SPACE` setting in your `MachineConfig` object can impact this condition. + -.Condition +Condition: ++ [source,text] ---- nodeWorkingSet + nodeSwapUsage < totalNodeMemory + totalSwapMemory × thresholdFactor @@ -48,4 +51,4 @@ You can use the following environment variables to adjust the values used to cal |Sets the `thresholdFactor` value used to calculate high swap utilization. |`AVERAGE_WINDOW_SIZE_SECONDS` |Sets the time interval for calculating the average swap usage. -|=== \ No newline at end of file +|=== diff --git a/modules/virt-what-you-can-do-with-virt.adoc b/modules/virt-what-you-can-do-with-virt.adoc index ba7f298f3c6b..1a9e7d386073 100644 --- a/modules/virt-what-you-can-do-with-virt.adoc +++ b/modules/virt-what-you-can-do-with-virt.adoc @@ -7,9 +7,11 @@ = What you can do with {VirtProductName} ifndef::openshift-origin[] +[role="_abstract"] {VirtProductName} provides the scalable, enterprise-grade virtualization functionality in Red{nbsp}Hat OpenShift. endif::[] ifdef::openshift-origin[] +[role="_abstract"] {VirtProductName} provides the scalable, enterprise-grade virtualization functionality in {product-title}. endif::[] You can use it to manage virtual machines (VMs) exclusively or alongside container workloads. 
diff --git a/virt/managing_vms/virt-edit-vms.adoc b/virt/managing_vms/virt-edit-vms.adoc index 006c6c5b2897..07dae1184524 100644 --- a/virt/managing_vms/virt-edit-vms.adoc +++ b/virt/managing_vms/virt-edit-vms.adoc @@ -32,7 +32,7 @@ include::modules/virt-adding-secret-configmap-service-account-to-vm.adoc[levelof include::modules/virt-updating-multiple-vms.adoc[leveloffset=+1] -include::modules/virt-configure-multiple-iothreads.adoc[leveloffsent=+1] +include::modules/virt-configure-multiple-iothreads.adoc[leveloffset=+1] [discrete] [id="additional-resources-configmaps"]