Merged
3 changes: 2 additions & 1 deletion modules/virt-NUMA-prereqs.adoc
@@ -12,7 +12,8 @@ Before you can enable NUMA functionality with {VirtProductName} VMs, you must en
* Worker nodes must have huge pages enabled.
* The `KubeletConfig` object on worker nodes must be configured with the `cpuManagerPolicy: static` spec to guarantee dedicated CPU allocation, which is a prerequisite for NUMA pinning.
+
.Example `cpuManagerPolicy: static` spec
Example `cpuManagerPolicy: static` spec:
+
[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
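The `KubeletConfig` manifest above is truncated in this diff view. A minimal complete sketch, assuming a hypothetical machine config pool label (`custom-kubelet: cpumanager-enabled`), might look like this:

[source,yaml]
----
apiVersion: machineconfiguration.openshift.io/v1
kind: KubeletConfig
metadata:
  name: cpumanager-enabled
spec:
  machineConfigPoolSelector:
    matchLabels:
      custom-kubelet: cpumanager-enabled  # assumed pool label
  kubeletConfig:
    cpuManagerPolicy: static
    cpuManagerReconcilePeriod: 5s
----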
7 changes: 5 additions & 2 deletions modules/virt-about-aaq-operator.adoc
@@ -6,6 +6,7 @@
[id="virt-about-aaq-operator_{context}"]
= About the AAQ Operator

[role="_abstract"]
The Application-Aware Quota (AAQ) Operator provides more flexible and extensible quota management compared to the native `ResourceQuota` object in the {product-title} platform.

In a multi-tenant cluster environment, where multiple workloads operate on shared infrastructure and resources, using the Kubernetes native `ResourceQuota` object to limit aggregate CPU and memory consumption presents infrastructure overhead and live migration challenges for {VirtProductName} workloads.
@@ -21,7 +22,8 @@ The AAQ Operator introduces two new API objects defined as custom resource defin

* `ApplicationAwareResourceQuota`: Sets aggregate quota restrictions enforced per namespace. The `ApplicationAwareResourceQuota` API is compatible with the native `ResourceQuota` object and shares the same specification and status definitions.
+
.Example manifest
Example manifest:
+
[source,yaml]
----
apiVersion: aaq.kubevirt.io/v1alpha1
@@ -41,7 +43,8 @@ spec:

* `ApplicationAwareClusterResourceQuota`: Mirrors the `ApplicationAwareResourceQuota` object at a cluster scope. It is compatible with the native `ClusterResourceQuota` API object and shares the same specification and status definitions. When creating an AAQ cluster quota, you can select multiple namespaces based on annotation selection, label selection, or both by editing the `spec.selector.labels` or `spec.selector.annotations` fields.
+
.Example manifest
Example manifest:
+
[source,yaml]
----
apiVersion: aaq.kubevirt.io/v1alpha1
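The example manifests in this file are truncated in the diff view. For illustration, a minimal `ApplicationAwareResourceQuota` sketch follows; the name, namespace, and quota values are assumptions, and the `spec.hard` fields mirror the native `ResourceQuota` specification:

[source,yaml]
----
apiVersion: aaq.kubevirt.io/v1alpha1
kind: ApplicationAwareResourceQuota
metadata:
  name: example-resource-quota  # assumed name
  namespace: example-ns         # assumed namespace
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "10"
----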
5 changes: 3 additions & 2 deletions modules/virt-about-application-consistent-backups.adoc
@@ -6,8 +6,9 @@
[id="virt-about-application-consistent-backups_{context}"]
= About application-consistent snapshots and backups

You can configure application-consistent snapshots and backups for Linux or Windows virtual machines (VMs) through a cycle of freezing and thawing. For any application, you can either configure a script on a Linux VM or register on a Windows VM to be notified when a snapshot or backup is due to begin.
[role="_abstract"]
You can configure application-consistent snapshots and backups for Linux or Windows virtual machines (VMs) through a cycle of freezing and thawing. For any application, you can configure a script on a Linux VM or register on a Windows VM to be notified when a snapshot or backup is due to begin.

On a Linux VM, freeze and thaw processes trigger automatically when a snapshot is taken or a backup is started by using, for example, a plugin from Velero or another backup vendor. The freeze process, performed by QEMU Guest Agent (QEMU GA) freeze hooks, ensures that before the snapshot or backup of a VM occurs, all of the VM's filesystems are frozen and each appropriately configured application is informed that a snapshot or backup is about to start. This notification affords each application the opportunity to quiesce its state. Depending on the application, quiescing might involve temporarily refusing new requests, finishing in-progress operations, and flushing data to disk. The operating system is then directed to quiesce the filesystems by flushing outstanding writes to disk and freezing new write activity. All new connection requests are refused. When all applications have become inactive, the QEMU GA freezes the filesystems, and a snapshot is taken or a backup is initiated. After the snapshot is taken or the backup starts, the thawing process begins. Filesystem writes are reactivated, and applications receive notification to resume normal operations.

The same cycle of freezing and thawing is available on a Windows VM. Applications register with the Volume Shadow Copy Service (VSS) to receive notifications that they should flush out their data because a backup or snapshot is imminent. Thawing of the applications after the backup or snapshot is complete returns them to an active state. For more details, see the Windows Server documentation about the Volume Shadow Copy Service.
7 changes: 4 additions & 3 deletions modules/virt-about-auto-bootsource-updates.adoc
@@ -7,12 +7,13 @@
[id="virt-about-auto-bootsource-updates_{context}"]
= About automatic boot source updates

Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates the _system-defined_ boot sources that {VirtProductName} provides.
[role="_abstract"]
Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs.

You can opt out of automatic updates for all system-defined boot sources by disabling the `enableCommonBootImageImport` feature gate. If you disable this feature gate, all `DataImportCron` objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually.
By default, CDI automatically updates the _system-defined_ boot sources that {VirtProductName} provides. You can opt out of automatic updates for all system-defined boot sources by disabling the `enableCommonBootImageImport` feature gate. If you disable this feature gate, all `DataImportCron` objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually.

When the `enableCommonBootImageImport` feature gate is disabled, `DataSource` objects are reset so that they no longer point to the original boot source. An administrator can manually provide a boot source by populating a PVC with an operating system, optionally creating a volume snapshot from the PVC, and then referring to the PVC or volume snapshot from the `DataSource` object.

_Custom_ boot sources that are not provided by {VirtProductName} are not controlled by the feature gate. You must manage them individually by editing the `HyperConverged` custom resource (CR). You can also use this method to manage individual system-defined boot sources.

Cluster administrators can enable automatic subscription for {op-system-base-full} virtual machines in the {product-title} web console.
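A hedged sketch of opting out via the `enableCommonBootImageImport` feature gate in the `HyperConverged` CR follows; the exact location of the field can vary between versions, so treat the `spec.featureGates` placement as an assumption:

[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  featureGates:
    enableCommonBootImageImport: false  # opt out of system-defined boot source updates
----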
1 change: 1 addition & 0 deletions modules/virt-about-block-pvs.adoc
@@ -8,6 +8,7 @@
[id="virt-about-block-pvs_{context}"]
= About block persistent volumes

[role="_abstract"]
A block persistent volume (PV) is a PV that is backed by a raw block device. These volumes
do not have a file system and can provide performance benefits for
virtual machines by reducing overhead.
1 change: 1 addition & 0 deletions modules/virt-about-cdi-operator.adoc
@@ -6,6 +6,7 @@
[id="virt-about-cdi-operator_{context}"]
= About the Containerized Data Importer (CDI) Operator

[role="_abstract"]
The CDI Operator, `cdi-operator`, manages CDI and its related resources, which imports a virtual machine (VM) image into a persistent volume claim (PVC) by using a data volume.

image::cnv_components_cdi-operator.png[cdi-operator components]
5 changes: 4 additions & 1 deletion modules/virt-about-changing-removing-mediated-devices.adoc
@@ -6,6 +6,9 @@
[id="about-changing-removing-mediated-devices_{context}"]
= About changing and removing mediated devices

[role="_abstract"]
As an administrator, you can change or remove mediated devices by editing the `HyperConverged` custom resource (CR).

You can reconfigure or remove mediated devices in several ways:

* Edit the `HyperConverged` CR and change the contents of the `mediatedDeviceTypes` stanza.
@@ -17,4 +20,4 @@ You can reconfigure or remove mediated devices in several ways:
[NOTE]
====
If you remove the device information from the `spec.permittedHostDevices` stanza without also removing it from the `spec.mediatedDevicesConfiguration` stanza, you cannot create a new mediated device type on the same node. To properly remove mediated devices, remove the device information from both stanzas.
====
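As a sketch of the two stanzas the note refers to, a `HyperConverged` CR fragment might look like the following; the device type and selector values are illustrative assumptions:

[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  mediatedDevicesConfiguration:
    mediatedDeviceTypes:
    - nvidia-231                          # example device type
  permittedHostDevices:
    mediatedDevices:
    - mdevNameSelector: GRID T4-2Q        # example selector
      resourceName: nvidia.com/GRID_T4-2Q # example resource name
----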
12 changes: 5 additions & 7 deletions modules/virt-about-cloning.adoc
@@ -6,12 +6,10 @@
[id="virt-about-cloning_{context}"]
= About cloning

When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the following Container Storage Interface (CSI) clone methods:
[role="_abstract"]
When cloning a data volume, the Containerized Data Importer (CDI) chooses one of the Container Storage Interface (CSI) clone methods: CSI volume cloning or smart cloning. Both methods are efficient but have certain requirements. If the requirements are not met, the CDI uses host-assisted cloning.

* CSI volume cloning
* Smart cloning

Both CSI volume cloning and smart cloning methods are efficient, but they have certain requirements for use. If the requirements are not met, the CDI uses host-assisted cloning. Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods.
Host-assisted cloning is the slowest and least efficient method of cloning, but it has fewer requirements than either of the other two cloning methods.

[id="csi-volume-cloning_{context}"]
== CSI volume cloning
@@ -47,7 +45,7 @@ When the requirements for neither Container Storage Interface (CSI) volume cloni

Host-assisted cloning uses a source pod and a target pod to copy data from the source volume to the target volume. The target persistent volume claim (PVC) is annotated with the fallback reason that explains why host-assisted cloning has been used, and an event is created.

.Example PVC target annotation
Example PVC target annotation:

[source,yaml]
----
@@ -60,7 +58,7 @@ metadata:
cdi.kubevirt.io/cloneType: copy
----

.Example event
Example event:

[source,terminal]
----
1 change: 1 addition & 0 deletions modules/virt-about-cluster-network-addons-operator.adoc
@@ -6,6 +6,7 @@
[id="virt-about-cluster-network-addons-operator_{context}"]
= About the Cluster Network Addons Operator

[role="_abstract"]
The Cluster Network Addons Operator, `cluster-network-addons-operator`, deploys networking components on a cluster and manages the related resources for extended network functionality.

image::cnv_components_cluster-network-addons-operator.png[cluster-network-addons-operator components]
7 changes: 5 additions & 2 deletions modules/virt-about-control-plane-only-updates.adoc
@@ -6,7 +6,10 @@
[id="virt-about-control-plane-only-updates_{context}"]
= Control Plane Only updates

Every even-numbered minor version of {product-title} is an Extended Update Support (EUS) version. However, Kubernetes design mandates serial minor version updates, so you cannot directly update from one EUS version to the next. An EUS-to-EUS update starts with updating {VirtProductName} to the latest z-stream of the next odd-numbered minor version. Next, update {product-title} to the target EUS version. When the {product-title} update succeeds, the corresponding update for {VirtProductName} becomes available. You can now update {VirtProductName} to the target EUS version.
[role="_abstract"]
Every even-numbered minor version of {product-title} is an Extended Update Support (EUS) version. However, Kubernetes design mandates serial minor version updates, so you cannot directly update from one EUS version to the next.

An EUS-to-EUS update starts with updating {VirtProductName} to the latest z-stream of the next odd-numbered minor version. Next, update {product-title} to the target EUS version. When the {product-title} update succeeds, the corresponding update for {VirtProductName} becomes available. You can now update {VirtProductName} to the target EUS version.

[NOTE]
====
@@ -29,4 +32,4 @@ Before beginning a Control Plane Only update, you must:
By default, {VirtProductName} automatically updates workloads, such as the `virt-launcher` pod, when you update the {VirtProductName} Operator. You can configure this behavior in the `spec.workloadUpdateStrategy` stanza of the `HyperConverged` custom resource.
====

// link to EUS to EUS docs in assembly due to module limitations
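A sketch of the `spec.workloadUpdateStrategy` stanza mentioned in the note above, with assumed method and batch values:

[source,yaml]
----
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec:
  workloadUpdateStrategy:
    workloadUpdateMethods:
    - LiveMigrate
    - Evict
    batchEvictionSize: 10       # assumed value
    batchEvictionInterval: 1m0s # assumed value
----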
3 changes: 2 additions & 1 deletion modules/virt-about-cpu-and-memory-quota-namespace.adoc
@@ -6,6 +6,7 @@
[id="virt-about-cpu-and-memory-quota-namespace_{context}"]
= About CPU and memory quotas in a namespace

[role="_abstract"]
A _resource quota_, defined by the `ResourceQuota` object, imposes restrictions on a namespace that limit the total amount of compute resources that can be consumed by resources within that namespace.

The `HyperConverged` custom resource (CR) defines the user configuration for the Containerized Data Importer (CDI). The CPU and memory request and limit values are set to a default value of `0`. This ensures that pods created by CDI that do not specify compute resource requirements are given the default values and are allowed to run in a namespace that is restricted with a quota.
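A hedged fragment showing where CDI compute defaults might be overridden in the `HyperConverged` CR; the field path `spec.resourceRequirements.storageWorkloads` and the non-zero values are assumptions for illustration:

[source,yaml]
----
spec:
  resourceRequirements:
    storageWorkloads:
      requests:
        cpu: 250m    # assumed override of the 0 default
        memory: 1Gi
      limits:
        cpu: 500m
        memory: 2Gi
----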
3 changes: 2 additions & 1 deletion modules/virt-about-creating-storage-classes.adoc
@@ -6,6 +6,7 @@
[id="virt-about-creating-storage-classes_{context}"]
= About creating storage classes

[role="_abstract"]
When you create a storage class, you set parameters that affect the dynamic provisioning of persistent volumes (PVs) that belong to that storage class. You cannot update a `StorageClass` object's parameters after you create it.

To use the hostpath provisioner (HPP), you must create an associated storage class for the CSI driver with the `storagePools` stanza.
@@ -15,4 +16,4 @@ In order to use the hostpath provisioner (HPP) you must create an associated sto
Virtual machines use data volumes that are based on local PVs. Local PVs are bound to specific nodes. While the disk image is prepared for consumption by the virtual machine, it is possible that the virtual machine cannot be scheduled to the node where the local storage PV was previously pinned.

To solve this problem, use the Kubernetes pod scheduler to bind the persistent volume claim (PVC) to a PV on the correct node. By using the `StorageClass` value with `volumeBindingMode` parameter set to `WaitForFirstConsumer`, the binding and provisioning of the PV is delayed until a pod is created using the PVC.
====
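A minimal HPP storage class sketch with the `storagePools` parameter and delayed binding; the class name and pool name are assumptions:

[source,yaml]
----
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hostpath-csi              # assumed name
provisioner: kubevirt.io.hostpath-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  storagePool: my-storage-pool    # assumed pool name
----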
5 changes: 4 additions & 1 deletion modules/virt-about-datavolumes.adoc
@@ -8,7 +8,10 @@
[id="virt-about-datavolumes_{context}"]
= About data volumes

`DataVolume` objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). You can create a data volume as either a standalone resource or by using the `dataVolumeTemplate` field in the virtual machine (VM) specification.
[role="_abstract"]
`DataVolume` objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC).

You can create a data volume as either a standalone resource or by using the `dataVolumeTemplate` field in the virtual machine (VM) specification.

[NOTE]
====
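A standalone `DataVolume` import sketch; the image URL and storage size are placeholders:

[source,yaml]
----
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: example-import-dv                  # assumed name
spec:
  source:
    http:
      url: "https://example.com/disk.qcow2" # placeholder image URL
  storage:
    resources:
      requests:
        storage: 10Gi
----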
5 changes: 4 additions & 1 deletion modules/virt-about-dedicated-resources.adoc
@@ -7,7 +7,10 @@

= About dedicated resources

[role="_abstract"]
When you enable dedicated resources for your virtual machine, your virtual machine's workload is scheduled on CPUs that will not be used by other processes.

By using dedicated resources, you can improve the performance of the virtual machine and the accuracy of latency predictions.
7 changes: 4 additions & 3 deletions modules/virt-about-dr-methods.adoc
@@ -6,10 +6,11 @@
[id="virt-about-dr-methods_{context}"]
= About disaster recovery methods

For an overview of disaster recovery (DR) concepts, architecture, and planning considerations, see the link:https://access.redhat.com/articles/7041594[Red{nbsp}Hat {VirtProductName} disaster recovery guide] in the Red{nbsp}Hat Knowledgebase.

[role="_abstract"]
The two primary DR methods for {VirtProductName} are Metropolitan Disaster Recovery (Metro-DR) and Regional-DR.

For an overview of disaster recovery (DR) concepts, architecture, and planning considerations, see the link:https://access.redhat.com/articles/7041594[Red{nbsp}Hat {VirtProductName} disaster recovery guide] in the Red{nbsp}Hat Knowledgebase.

[id="metro-dr_{context}"]
== Metro-DR

@@ -18,4 +19,4 @@ Metro-DR uses synchronous replication. It writes to storage at both the primary
[id="regional-dr_{context}"]
== Regional-DR

Regional-DR uses asynchronous replication. The data in the primary site is synchronized with the secondary site at regular intervals. For this type of replication, you can have a higher latency connection between the primary and secondary sites.
6 changes: 4 additions & 2 deletions modules/virt-about-dv-conditions-and-events.adoc
@@ -6,8 +6,10 @@
[id="virt-about-dv-conditions-and-events_{context}"]
= About data volume conditions and events

You can diagnose data volume issues by examining the output of the `Conditions` and `Events` sections
generated by the command:
[role="_abstract"]
You can diagnose data volume issues by examining the `Conditions` and `Events` sections of the `oc describe` command output.

Run the following command to inspect the data volume:

[source,terminal]
----
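The terminal block above is truncated in this diff view; as a sketch, the `oc describe` command the text refers to is:

[source,terminal]
----
$ oc describe dv <datavolume_name>
----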
3 changes: 2 additions & 1 deletion modules/virt-about-fusion-access-san.adoc
@@ -6,7 +6,8 @@
[id="about-fusion-access-san_{context}"]
= About {IBMFusionFirst}

{IBMFusionFirst} is a solution that provides a scalable clustered file system for enterprise storage, primarily designed to offer access to consolidated, block-level data storage. It presents storage devices, such as disk arrays, to the operating system as if they were direct-attached storage.
[role="_abstract"]
{IBMFusionFirst} provides a scalable clustered file system for enterprise storage, primarily designed to offer access to consolidated, block-level data storage. It presents storage devices, such as disk arrays, to the operating system as if they were direct-attached storage.

This solution is particularly geared towards enterprise storage for {VirtProductName} and leverages existing Storage Area Network (SAN) infrastructure. A SAN is a dedicated network of storage devices that is typically not accessible through the local area network (LAN).

1 change: 1 addition & 0 deletions modules/virt-about-hco-operator.adoc
@@ -6,6 +6,7 @@
[id="virt-about-hco-operator_{context}"]
= About the HyperConverged Operator (HCO)

[role="_abstract"]
The HCO, `hco-operator`, provides a single entry point for deploying and managing {VirtProductName} and several helper operators with opinionated defaults. It also creates custom resources (CRs) for those operators.

image::cnv_components_hco-operator.png[hco-operator components]
1 change: 1 addition & 0 deletions modules/virt-about-hpp-operator.adoc
@@ -6,6 +6,7 @@
[id="virt-about-hpp-operator_{context}"]
= About the Hostpath Provisioner (HPP) Operator

[role="_abstract"]
The HPP Operator, `hostpath-provisioner-operator`, deploys and manages the multi-node HPP and related resources.

image::cnv_components_hpp-operator.png[hpp-operator components]