CNV 2.1 release notes #16756

Merged
merged 1 commit on Oct 24, 2019
4 changes: 2 additions & 2 deletions _topic_map.yml
@@ -1200,10 +1200,10 @@ Topics:
# File: cnv-openshift-cluster-monitoring
# - Name: Collecting container-native virtualization data for Red Hat Support
# File: cnv-collecting-cnv-data
#- Name: Container-native virtualization 2.0 release notes
#- Name: Container-native virtualization 2.1 release notes
# Dir: cnv_release_notes
# Topics:
# - Name: Container-native virtualization 2.0 release notes
# - Name: Container-native virtualization 2.1 release notes
# File: cnv-release-notes
---
Name: Serverless applications
210 changes: 157 additions & 53 deletions cnv/cnv_release_notes/cnv-release-notes.adoc
@@ -16,62 +16,125 @@ include::modules/technology-preview.adoc[leveloffset=+2]

== New and changed features

=== Supported binding methods
* Open vSwitch (OVS) is no longer recommended and should not be used
in {CNVProductName} 2.0.
* For the default Pod network, `masquerade` is the only recommended binding
method. It is not supported for non-default networks.
* For secondary networks, use the `bridge` binding method.
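
For illustration, the recommended bindings might look like this in a virtual
machine definition. This is a hedged sketch only: the interface and network
names, and the `my-secondary-net` attachment, are illustrative and not taken
from the release notes.

[source,yaml]
----
# Sketch: one masquerade interface on the default Pod network,
# one bridge interface on a secondary multus-backed network.
domain:
  devices:
    interfaces:
    - name: default
      masquerade: {}   # recommended for the default Pod network
    - name: secondary
      bridge: {}       # recommended for secondary networks
networks:
- name: default
  pod: {}
- name: secondary
  multus:
    networkName: my-secondary-net
----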

=== Web console improvements
* You can now view all services associated with a virtual machine in the
*Virtual Machine Details* screen.


== Resolved issues
* The {product-title} dashboard captures high-level information about clusters.
From the {product-title} web console, access the dashboard by clicking
*Home -> Dashboards -> Overview*. Note that virtual machines are no longer listed
in the web console project overview. Virtual machines are now listed within the
*Cluster Inventory* dashboard card.

* Deleting a PVC after a CDI import fails no longer results in the importer Pod
getting stuck in a `CrashLoopBackOff` state. The PVC is now deleted normally.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1673683[*BZ#1673683*])
=== Other improvements

* After you install {CNVProductName}, MAC pool manager automatically starts.
If you define a secondary NIC without specifying the MAC address, the MAC pool
manager allocates a unique MAC address to the NIC.
+
[NOTE]
====
If you define a secondary NIC with a specific MAC address, that address
might conflict with another NIC in the cluster.
====
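+
A minimal sketch of such a NIC definition (the interface name is illustrative;
note the absent `macAddress` field):
+
[source,yaml]
----
interfaces:
- name: secondary
  bridge: {}
  # no macAddress field: the MAC pool manager assigns a unique address
----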

== Known issues

* Some KubeVirt resources are improperly retained when you remove {CNVProductName}.
As a workaround, you must manually remove them by running the following command:
`oc delete apiservices v1alpha3.subresources.kubevirt.io -n openshift-cnv`.
These resources will be removed automatically when
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1712429[*BZ#1712429*]) is
resolved.

* When using an older version of `virtctl` with {CNVProductName} 2.0, `virtctl`
cannot connect to the requested virtual machine. On the client, update the
`virtctl` RPM package to the latest version to resolve this issue.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1706108[*BZ#1706108*])
== Resolved issues

* Interfaces connected to the default Pod network lose connectivity after
live migration. As a workaround, use an additional `multus`-backed network.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1693532[*BZ#1693532*])
* Previously, if you used the web console to create a virtual machine template
that had the same name as an existing virtual machine, the operation failed.
This resulted in the message `Name is already used by another virtual machine`.
This issue is fixed in {CNVProductName} 2.1.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1717802[*BZ#1717802*])

* {CNVProductNameStart} cannot reliably identify node drains that are triggered by
running either `oc adm drain` or `kubectl drain`. Do not run these commands on
the nodes of any clusters where {CNVProductName} is deployed. The nodes might not
drain if there are virtual machines running on top of them.
The current solution is to put nodes into maintenance.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1707427[*BZ#1707427*])

* If you create a virtual machine with the Pod network connected in `bridge`
mode and use a `cloud-init` disk, the virtual machine will lose its network
connectivity after being restarted. As a workaround, remove the `HWADDR` line
in the file `/etc/sysconfig/network-scripts/ifcfg-eth0`.
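+
As a rough sketch, you can delete that line inside the guest as follows
(this assumes a RHEL-style guest where the interface configuration lives at
that path):
+
----
$ sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
----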
* Previously, if you created a virtual machine with the Pod network connected in
`bridge` mode and used a `cloud-init` disk, the virtual machine lost its network
connectivity after being restarted. This issue is fixed in {CNVProductName} 2.1.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1708680[*BZ#1708680*])

* Due to an upstream issue, `masquerade` mode does not currently work with
{CNVProductName}: you cannot connect a virtual machine to the default Pod
network while in `masquerade` mode.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1725848[*BZ#1725848*])

* When you create a NIC in `masquerade` mode by using the wizard, you cannot
specify the `port` option.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1725848[*BZ#1725848*])
== Known issues

* When you add a disk to a virtual machine by using the *Disks* tab in the web
console, the added disk always has a `Filesystem` `volumeMode`, regardless of
the `volumeMode` set in the `kubevirt-storage-class-default` ConfigMap.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1753688[*BZ#1753688*])

* After migration, a virtual machine is assigned a new IP address. However, the
commands `oc get vmi` and `oc describe vmi` still generate output containing the
obsolete IP address. (link:https://bugzilla.redhat.com/show_bug.cgi?id=1686208[*BZ#1686208*])
+
** As a workaround, view the correct IP address by running the following command:
+
----
$ oc get pod -o wide
----

* The virtual machines wizard does not load for users without administrator
privileges. This issue is caused by missing permissions that allow users to load
network attachment definitions.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1743985[*BZ#1743985*])
+
** As a workaround, provide the user with permissions to load the network attachment
definitions.
+
. Define `ClusterRole` and `ClusterRoleBinding` objects in a YAML configuration
file, using the following examples:
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cni-resources
rules:
- apiGroups: ["k8s.cni.cncf.io"]
  resources: ["*"]
  verbs: ["*"]
----
+
[source,yaml]
----
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: <role-binding-name>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cni-resources
subjects:
- kind: User
  name: <user to grant the role to>
  namespace: <namespace of the user>
----
+
. As a `cluster-admin` user, run the following command to create the `ClusterRole`
and `ClusterRoleBinding` objects you defined:
+
----
$ oc create -f <filename>.yaml
----

* When you navigate to the *Virtual Machines Console* tab, sometimes no content
is displayed. As a workaround, use the serial console.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1753606[*BZ#1753606*])

* When you attempt to list all instances of the {CNVProductName} operator from a
browser, you receive a 404 (page not found) error.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1757526[*BZ#1757526*])
+
** As a workaround, run the following command:
+
----
$ oc get pods -n openshift-cnv | grep operator
----

* Some resources are improperly retained when removing {CNVProductName}. You
must manually remove these resources in order to reinstall {CNVProductName}.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1712429[*BZ#1712429*]),
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1757705[*BZ#1757705*])
+
** As a workaround, follow this procedure:
link:https://access.redhat.com/articles/4484961[Removing leftover resources from container-native virtualization 2.1 uninstallation]

* If a virtual machine uses guaranteed CPUs, it will not be scheduled, because
the label `cpumanager=true` is not automatically set on nodes. As a
@@ -80,12 +143,53 @@ Then, manually label the nodes with `cpumanager=true` before running virtual
machines with guaranteed CPUs on your cluster.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1718944[*BZ#1718944*])

* If you use the web console to create a virtual machine template that has the same name as an existing
virtual machine, the operation fails and the message `Name is already used by another virtual machine` is displayed.
As a workaround, create the template from the command line.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1717930[*BZ#1717930*])
* Live migration fails when nodes have different CPU models. Even in cases where
nodes have the same physical CPU model, differences introduced by microcode
updates have the same effect. This is because the default settings trigger
host CPU passthrough behavior, which is incompatible with live migration.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1760028[*BZ#1760028*])
+
** As a workaround, set the default CPU model in the `kubevirt-config` ConfigMap,
as shown in the following example:
+
[NOTE]
====
You must make this change before starting the virtual machines that support
live migration.
====
+
. Open the `kubevirt-config` ConfigMap for editing by running the following command:
+
----
$ oc edit configmap kubevirt-config -n openshift-cnv
----
+
. Edit the ConfigMap:
+
[source,yaml]
----
kind: ConfigMap
metadata:
  name: kubevirt-config
data:
  default-cpu-model: "<cpu-model>" <1>
----
<1> Replace `<cpu-model>` with the actual CPU model value. You can determine this
value by running `oc describe node <node>` for all nodes and looking at the
`cpu-model-<name>` labels. Select the CPU model that is present on all of your
nodes.
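+
For example, you might collect the candidate labels from every node at once.
This is a hedged sketch, not from the release notes, and it assumes `jq` is
available on the client:
+
----
$ oc get nodes -o json \
  | jq -r '.items[].metadata.labels | keys[] | select(contains("cpu-model"))' \
  | sort | uniq -c
----
+
Choose a CPU model whose count equals the number of nodes in the cluster.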

* The {CNVProductName} upgrade process occasionally fails due to an interruption
from the Operator Lifecycle Manager (OLM). This issue is caused by the limitations
associated with using a declarative API to track the state of {CNVProductName}
Operators. Enabling automatic updates during
xref:../cnv_install/installing-container-native-virtualization.adoc#cnv-subscribing-to-hco-catalog_installing-container-native-virtualization[installation]
decreases the risk of encountering this issue.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1759612[*BZ#1759612*])

* ReadWriteMany (RWX) is the only supported storage access mode for live migration,
importing VMware virtual machines, and creating virtual machines by using the
wizard.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1724654[*BZ#1724654*])
* {CNVProductNameStart} cannot reliably identify node drains that are triggered by
running either `oc adm drain` or `kubectl drain`. Do not run these commands on
the nodes of any clusters where {CNVProductName} is deployed. The nodes might not
drain if there are virtual machines running on top of them.
The current solution is to put nodes into maintenance.
(link:https://bugzilla.redhat.com/show_bug.cgi?id=1707427[*BZ#1707427*])
2 changes: 1 addition & 1 deletion modules/cnv-document-attributes.adoc
Expand Up @@ -26,4 +26,4 @@
//
:Install_BookName: Installing container-native virtualization
:Using_BookName: Using container-native virtualization
:RN_BookName: Container-native virtualization 2.0 release notes
:RN_BookName: Container-native virtualization 2.1 release notes