
After update to RHEL 8.5 + latest virt:av from 8.5, libvirt IPI no longer works. #5401

Closed
ElCoyote27 opened this issue Nov 18, 2021 · 46 comments
Labels: lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.)

Comments

@ElCoyote27

Hi,

I have been using the ansible-based ocp_libvirt_ipi role for some time on RHEL (7.8, then 7.9, 8.2 and 8.3).
The role leverages this snippet from the source code of the openshift installer:

      shell: |
        cd {{ kvm_workdir }}/go/src/github.com/openshift/installer/

Ever since patching my RHEL 8.5 hypervisors to the latest libvirt* packages from the 'virt:av' stream, ocp_libvirt_ipi has been unable to deploy successfully: Terraform works, but the freshly installed set of masters is unable to spawn the 'workers'.

A broken cluster looks like this:

[root@palanthas ~]# virsh list
 Id   Name                   State
--------------------------------------
 1    dc03                   running
 2    dc02                   running
 3    ocp4p-pmrl7-master-2   running
 5    ocp4p-pmrl7-master-0   running
 6    ocp4p-pmrl7-master-1   running

A working cluster looks like this (for me):

 Id   Name                         State
--------------------------------------------
 1    dc02                         running
 2    dc03                         running
 10   ocp4p-wtvsg-master-2         running
 11   ocp4p-wtvsg-master-0         running
 12   ocp4p-wtvsg-master-1         running
 13   ocp4p-wtvsg-worker-0-wv8ds   running
 14   ocp4p-wtvsg-worker-0-xbdrc   running
 15   ocp4p-wtvsg-worker-0-9trjv   running
 19   ocp4p-wtvsg-infra-0-52qwp    running
 20   ocp4p-wtvsg-infra-0-92mv5    running
 21   ocp4p-wtvsg-infra-0-lkmgc    running

I have reproduced this with the code from OCP 4.6, 4.7 and 4.8, and the results are the same.

The issue started occurring when the libvirt packages on my RHEL 8.5 hypervisors were updated from:

  • 7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64
    to:
  • 7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64 (latest advanced virt rpms)
@ElCoyote27 (Author)

The complete rpm transaction was:

[root@palanthas ~]# dnf history info 356
Updating Subscription Management repositories.
Transaction ID : 356
Begin time     : Thu 18 Nov 2021 12:21:51 PM EST
Begin rpmdb    : 4999:328dd7b5f10de0b2c201de806048107a50da2907
End time       : Thu 18 Nov 2021 12:23:08 PM EST (77 seconds)
End rpmdb      : 4996:91cae1eb8b81c74f247defa30892c12755b71ded
User           : root <root>
Return-Code    : Success
Releasever     : 8
Command Line   : update -y
Comment        :
Packages Altered:
    Install   qemu-kvm-hw-usbredir-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                      @advanced-virt-for-rhel-8-x86_64-rpms
    Upgrade   google-chrome-stable-96.0.4664.45-1.x86_64                                                 @krynn_Google_Chrome_google-chrome-rpms
    Upgraded  google-chrome-stable-95.0.4638.69-1.x86_64                                                 @@System
    Upgrade   qemu-kvm-block-rbd-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                        @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-block-rbd-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                     @@System
    Upgrade   libvirt-daemon-driver-storage-core-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64            @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-storage-core-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64         @@System
    Upgrade   libvirt-daemon-driver-storage-iscsi-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64           @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-storage-iscsi-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64        @@System
    Upgrade   libguestfs-1:1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64                                 @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libguestfs-1:1.44.0-2.module+el8.4.0+10146+75917d2f.x86_64                                 @@System
    Upgrade   qemu-kvm-docs-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                             @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-docs-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                          @@System
    Upgrade   python3-libvirt-7.6.0-1.module+el8.5.0+12098+85b3670b.x86_64                               @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  python3-libvirt-7.0.0-1.module+el8.4.0+9469+2eaf72bc.x86_64                                @@System
    Upgrade   libvirt-daemon-driver-network-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                 @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-network-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64              @@System
    Upgrade   qemu-kvm-block-iscsi-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                      @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-block-iscsi-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                   @@System
    Upgrade   libvirt-daemon-driver-storage-rbd-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64             @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-storage-rbd-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64          @@System
    Upgrade   libguestfs-tools-c-1:1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64                         @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libguestfs-tools-c-1:1.44.0-2.module+el8.4.0+10146+75917d2f.x86_64                         @@System
    Upgrade   qemu-kvm-ui-opengl-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                        @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-ui-opengl-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                     @@System
    Upgrade   libvirt-daemon-driver-storage-iscsi-direct-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64    @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-storage-iscsi-direct-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64 @@System
    Upgrade   qemu-kvm-core-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                             @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-core-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                          @@System
    Upgrade   hivex-1.3.18-22.module+el8.5.0+12087+2208d04c.x86_64                                       @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  hivex-1.3.18-20.module+el8.3.0+6423+e4cb6418.x86_64                                        @@System
    Obsoleted hivex-1.3.18-21.module+el8.4.0+11609+2eba841a.x86_64                                       @@System
    Upgrade   libvirt-daemon-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                                @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64                             @@System
    Obsoleted libvirt-bash-completion-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64                    @@System
    Upgrade   qemu-kvm-block-gluster-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                    @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-block-gluster-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                 @@System
    Upgrade   libvirt-daemon-driver-storage-gluster-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64         @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-storage-gluster-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64      @@System
    Upgrade   libvirt-daemon-config-nwfilter-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-config-nwfilter-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64             @@System
    Upgrade   libvirt-daemon-kvm-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                            @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-kvm-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64                         @@System
    Upgrade   libvirt-devel-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                                 @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-devel-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64                              @@System
    Upgrade   qemu-img-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                                  @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-img-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                               @@System
    Upgrade   libvirt-daemon-driver-storage-mpath-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64           @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-storage-mpath-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64        @@System
    Upgrade   libguestfs-java-devel-1:1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64                      @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libguestfs-java-devel-1:1.44.0-2.module+el8.4.0+10146+75917d2f.x86_64                      @@System
    Upgrade   qemu-kvm-ui-spice-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                         @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-ui-spice-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                      @@System
    Upgrade   libguestfs-gobject-devel-1:1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64                   @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libguestfs-gobject-devel-1:1.44.0-2.module+el8.4.0+10146+75917d2f.x86_64                   @@System
    Upgrade   hivex-devel-1.3.18-22.module+el8.5.0+12087+2208d04c.x86_64                                 @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  hivex-devel-1.3.18-21.module+el8.4.0+11609+2eba841a.x86_64                                 @@System
    Upgrade   libvirt-daemon-driver-nwfilter-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-nwfilter-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64             @@System
    Upgrade   libguestfs-tools-1:1.44.0-3.module+el8.5.0+10681+17a9b157.noarch                           @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libguestfs-tools-1:1.44.0-2.module+el8.4.0+10146+75917d2f.noarch                           @@System
    Upgrade   qemu-kvm-block-ssh-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                        @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-block-ssh-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                     @@System
    Upgrade   libvirt-daemon-driver-nodedev-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                 @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-nodedev-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64              @@System
    Upgrade   swtpm-libs-0.6.0-2.20210607gitea627b3.module+el8.5.0+12696+4ce1c6bc.x86_64                 @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  swtpm-libs-0.4.2-1.20201201git2df14e3.module+el8.4.0+9341+96cf2672.x86_64                  @@System
    Upgrade   swtpm-tools-0.6.0-2.20210607gitea627b3.module+el8.5.0+12696+4ce1c6bc.x86_64                @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  swtpm-tools-0.4.2-1.20201201git2df14e3.module+el8.4.0+9341+96cf2672.x86_64                 @@System
    Upgrade   perl-Sys-Guestfs-1:1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64                           @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  perl-Sys-Guestfs-1:1.44.0-2.module+el8.4.0+10146+75917d2f.x86_64                           @@System
    Upgrade   libvirt-daemon-config-network-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                 @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-config-network-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64              @@System
    Upgrade   libvirt-daemon-driver-secret-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                  @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-secret-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64               @@System
    Upgrade   perl-Sys-Virt-7.4.0-1.module+el8.5.0+11289+b29e262f.x86_64                                 @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  perl-Sys-Virt-7.0.0-1.module+el8.4.0+9469+2eaf72bc.x86_64                                  @@System
    Upgrade   swtpm-0.6.0-2.20210607gitea627b3.module+el8.5.0+12696+4ce1c6bc.x86_64                      @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  swtpm-0.4.0-3.20200828git0c238a2.el8.x86_64                                                @@System
    Obsoleted swtpm-0.4.2-1.20201201git2df14e3.module+el8.4.0+9341+96cf2672.x86_64                       @@System
    Upgrade   libguestfs-xfs-1:1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64                             @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libguestfs-xfs-1:1.44.0-2.module+el8.4.0+10146+75917d2f.x86_64                             @@System
    Upgrade   libvirt-libs-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                                  @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-libs-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64                               @@System
    Upgrade   qemu-guest-agent-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                          @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-guest-agent-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                       @@System
    Upgrade   libvirt-daemon-driver-storage-disk-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64            @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-storage-disk-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64         @@System
    Upgrade   libguestfs-gobject-1:1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64                         @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libguestfs-gobject-1:1.44.0-2.module+el8.4.0+10146+75917d2f.x86_64                         @@System
    Upgrade   libvirt-client-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                                @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-client-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64                             @@System
    Upgrade   libguestfs-java-1:1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64                            @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libguestfs-java-1:1.44.0-2.module+el8.4.0+10146+75917d2f.x86_64                            @@System
    Upgrade   libvirt-docs-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                                  @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-docs-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64                               @@System
    Upgrade   libvirt-daemon-driver-storage-scsi-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64            @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-storage-scsi-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64         @@System
    Upgrade   qemu-kvm-block-curl-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                       @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-block-curl-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                    @@System
    Upgrade   libvirt-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                                       @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64                                    @@System
    Upgrade   libvirt-daemon-driver-storage-logical-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64         @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-storage-logical-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64      @@System
    Upgrade   qemu-kvm-common-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                           @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-common-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                        @@System
    Upgrade   perl-hivex-1.3.18-22.module+el8.5.0+12087+2208d04c.x86_64                                  @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  perl-hivex-1.3.18-20.module+el8.3.0+6423+e4cb6418.x86_64                                   @@System
    Obsoleted perl-hivex-1.3.18-21.module+el8.4.0+11609+2eba841a.x86_64                                  @@System
    Upgrade   libvirt-daemon-driver-interface-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64               @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-interface-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64            @@System
    Upgrade   libguestfs-devel-1:1.44.0-3.module+el8.5.0+10681+17a9b157.x86_64                           @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libguestfs-devel-1:1.44.0-2.module+el8.4.0+10146+75917d2f.x86_64                           @@System
    Upgrade   libvirt-daemon-driver-storage-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                 @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-storage-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64              @@System
    Upgrade   qemu-kvm-15:6.0.0-33.module+el8.5.0+13041+05be2dc6.x86_64                                  @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  qemu-kvm-15:5.2.0-16.module+el8.4.0+13043+9eb47245.11.x86_64                               @@System
    Upgrade   libvirt-daemon-driver-qemu-7.6.0-6.module+el8.5.0+13051+7ddbe958.x86_64                    @advanced-virt-for-rhel-8-x86_64-rpms
    Upgraded  libvirt-daemon-driver-qemu-7.0.0-14.5.module+el8.4.0+13026+f38c77ab.x86_64                 @@System
    Upgrade   docker-ce-rootless-extras-20.10.11-3.el8.x86_64                                            @krynn_Docker_CE_rhel8-docker-ce-rpms
    Upgraded  docker-ce-rootless-extras-20.10.10-3.el8.x86_64                                            @@System
    Upgrade   containerd.io-1.4.12-3.1.el8.x86_64                                                        @krynn_Docker_CE_rhel8-docker-ce-rpms
    Upgraded  containerd.io-1.4.11-3.1.el8.x86_64                                                        @@System
    Upgrade   docker-ce-3:20.10.11-3.el8.x86_64                                                          @krynn_Docker_CE_rhel8-docker-ce-rpms
    Upgraded  docker-ce-3:20.10.10-3.el8.x86_64                                                          @@System
    Upgrade   docker-ce-cli-1:20.10.11-3.el8.x86_64                                                      @krynn_Docker_CE_rhel8-docker-ce-rpms
    Upgraded  docker-ce-cli-1:20.10.10-3.el8.x86_64                                                      @@System
Scriptlet output:
   1 Couldn't write '1' to 'net/ipv4/tcp_tw_recycle', ignoring: No such file or directory

@ElCoyote27 (Author)

In the system log, the following messages show up (for the workers):

Nov 18 13:01:35 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:01:36 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:01:38 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:01:40 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:01:42 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:01:43 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:01:46 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:01:47 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:01:49 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:01:51 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:01:53 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:01:54 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:01:57 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:01:58 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:02:00 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:02:01 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:02:04 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:02:05 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:02:07 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:02:09 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:02:11 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:02:12 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:02:16 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:02:18 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:02:27 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:02:28 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:02:48 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:02:49 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:03:29 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:03:30 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:04:51 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:04:52 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:07:35 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:07:36 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:10:22 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:10:23 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:13:03 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:13:04 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already
Nov 18 13:29:43 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-p2szs.ignition' exists already
Nov 18 13:29:45 palanthas libvirtd[53638]: storage volume 'ocp4p-pmrl7-worker-0-rvkqv.ignition' exists already

@ElCoyote27 (Author)

This happens 100% of the time once the libvirt packages are updated from 7.0.0 to 7.6.0 (see the versions above) and qemu-kvm is updated from 5.2.0 to 6.0.0.
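A quick way to check which builds a hypervisor is actually running (a trivial sketch using plain rpm/virsh queries):

rpm -q libvirt-daemon qemu-kvm    # shows whether the 7.6.0 / 6.0.0 module builds are installed
virsh version                     # shows the library and daemon versions libvirt itself reports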

@ElCoyote27 (Author)

@luisarizmendi for awareness

@ElCoyote27 (Author)

The bootstrap and masters are launched by Terraform with the freshly rebuilt installer, and this still works. It is only the workers (launched by the 3-master cluster talking to libvirt) that no longer launch successfully.

@ElCoyote27 (Author) commented Nov 19, 2021

On a fresh cluster which failed to launch the workers, I see this:

[root@daltigoth ~]# oc get events|grep worker
3h2m        Warning   FailedCreate             machine/ocp4d-nrgq6-worker-0-thlbw                  CreateError
178m        Warning   FailedCreate             machine/ocp4d-nrgq6-worker-0-thlbw                  CreateError
3m26s       Warning   FailedCreate             machine/ocp4d-nrgq6-worker-0-thlbw                  CreateError
3h2m        Warning   FailedCreate             machine/ocp4d-nrgq6-worker-0-xf8w4                  CreateError
178m        Warning   FailedCreate             machine/ocp4d-nrgq6-worker-0-xf8w4                  CreateError
3m32s       Warning   FailedCreate             machine/ocp4d-nrgq6-worker-0-xf8w4                  CreateError
3h2m        Warning   FailedCreate             machine/ocp4d-nrgq6-worker-0-zw4dx                  CreateError
178m        Warning   FailedCreate             machine/ocp4d-nrgq6-worker-0-zw4dx                  CreateError
3m28s       Warning   FailedCreate             machine/ocp4d-nrgq6-worker-0-zw4dx                  CreateError


@ElCoyote27 (Author)

# oc logs  machine/ocp4d-nrgq6-worker-0-thlbw     
error: no kind "Machine" is registered for version "machine.openshift.io/v1beta1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"

@ElCoyote27 (Author)

NAMESPACE               NAME                         PHASE          TYPE   REGION   ZONE   AGE
openshift-machine-api   ocp4d-nrgq6-master-0         Running                               3h37m
openshift-machine-api   ocp4d-nrgq6-master-1         Running                               3h37m
openshift-machine-api   ocp4d-nrgq6-master-2         Running                               3h37m
openshift-machine-api   ocp4d-nrgq6-worker-0-thlbw   Provisioning                          3h34m
openshift-machine-api   ocp4d-nrgq6-worker-0-xf8w4   Provisioning                          3h34m
openshift-machine-api   ocp4d-nrgq6-worker-0-zw4dx   Provisioning                          3h34m

And:

 Name                                  Path
--------------------------------------------------------------------------------------------------------------------------
 ocp4d-nrgq6-base                      /var/lib/libvirt/openshift-images/ocp4d-nrgq6/ocp4d-nrgq6-base
 ocp4d-nrgq6-master-0                  /var/lib/libvirt/openshift-images/ocp4d-nrgq6/ocp4d-nrgq6-master-0
 ocp4d-nrgq6-master-1                  /var/lib/libvirt/openshift-images/ocp4d-nrgq6/ocp4d-nrgq6-master-1
 ocp4d-nrgq6-master-2                  /var/lib/libvirt/openshift-images/ocp4d-nrgq6/ocp4d-nrgq6-master-2
 ocp4d-nrgq6-master.ign                /var/lib/libvirt/openshift-images/ocp4d-nrgq6/ocp4d-nrgq6-master.ign
 ocp4d-nrgq6-worker-0-thlbw.ignition   /var/lib/libvirt/openshift-images/ocp4d-nrgq6/ocp4d-nrgq6-worker-0-thlbw.ignition
 ocp4d-nrgq6-worker-0-xf8w4            /var/lib/libvirt/openshift-images/ocp4d-nrgq6/ocp4d-nrgq6-worker-0-xf8w4
 ocp4d-nrgq6-worker-0-xf8w4.ignition   /var/lib/libvirt/openshift-images/ocp4d-nrgq6/ocp4d-nrgq6-worker-0-xf8w4.ignition
 ocp4d-nrgq6-worker-0-zw4dx.ignition   /var/lib/libvirt/openshift-images/ocp4d-nrgq6/ocp4d-nrgq6-worker-0-zw4dx.ignition

And then:

{
  "ignition": {
    "config": {
      "merge": [
        {
          "source": "https://api-int.ocp4d.openshift.lasthome.solace.krynn:22623/config/worker"
        }
      ]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          {
            "source": "data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFRENDQWZpZ0F3SUJBZ0lJTC9KNm5tUGx6Tk13RFFZSktvWklodmNOQVFFTEJRQXdKakVTTUJBR0ExVUUKQ3hNSmIzQmxibk5vYVdaME1SQXdEZ1lEVlFRR
EV3ZHliMjkwTFdOaE1CNFhEVEl4TVRFeE9URTJOREl3TUZvWApEVE14TVRFeE56RTJOREl3TUZvd0pqRVNNQkFHQTFVRUN4TUpiM0JsYm5Ob2FXWjBNUkF3RGdZRFZRUURFd2R5CmIyOTBMV05oTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF1SjU1RWxYUC82Uz
kKTTkvWWViUC9neGxQeEFlNFJVYzhpbGdDaWdKV0ZLZFp5QnhTOXQzQ1AyU0d5REx5S2FoT0pNRTNSSWZKK3FhbAo3YjI2ZEpUVXQ4a0daYUhpbTQyOGRHYXdiOTRzQjJWWWJUNGpPNGl3enM3Tm9udHdjY1NQQllRdk9XcmsxekhPCmtyQXRBcGFtbWp1bU1yTTRhNFRCMVZWVG5NQnA3eVZZcUV
kZ3l0Z29PZnA3VVN6aTVKK3p5VHhxKzdEN2FXejcKY0Z4RC9PN0x1ZFBMNEpkNVlnWGlIbjg5NHREdTFLQVl5RHVYYnNrZCtITUFFZGQycVFPZUZNNW1TZlpWQ3pZMwowSU1aemxMRDdvL0ZJOFRqS2hkN2NjSFZocks5YXVCdHhBbVREMjZBVUxQSEdQbWhyNHdqaVc0bWZZR3pyVmo0Cmtjb3JK
SEsyMndJREFRQUJvMEl3UURBT0JnTlZIUThCQWY4RUJBTUNBcVF3RHdZRFZSMFRBUUgvQkFVd0F3RUIKL3pBZEJnTlZIUTRFRmdRVW5PcTZMRC9ycXRxMlZWMnZtdjNwdFZQVmZ1OHdEUVlKS29aSWh2Y05BUUVMQlFBRApnZ0VCQUZ1WlRJTnVVR01KTDBZSkRDT3h1ZS8rbnM4QUhtR1l2dDA4Q
XZyNmJQQkNIZlJ0L1lBbHJHRzkzS1RVClZVUFJIeVdFVlNNSWU3bEt4bWlvMERHSFFMNDBYaWxuQjBaVExOdE5yUkVLN3JEM1M2NFRXTjd5YTNIYVhtQnUKSTFZNFFsUGFacUFqR3R1YmJveGY2N1NUaHFsL09IcVNGdkxzcUo2NFAwQW0yQ3hGb3N3N2VpSW9uMWJkNEErMgpsdlJqdlJQMWYxQ0
xrWTlTREJoRlVUVmwyTGNKMmlIUXVVc1cvNEJWM1owWmp1dmREbDFVWnRFZFVPUWxOdkliCnBITUFieXIwQXpxdWZwN2taODZkUHQzNm80dDJTeVpDY1VpY3RwTmZTYzhyWFZzbUU0S1NjcGdxSEd6KzFybVgKK255WmNDTE8rM0dwK2w5RTBjRWMyYTEyQ0lzPQotLS0tLUVORCBDRVJUSUZJQ0F
URS0tLS0tCg=="
          }
        ]
      }
    },
    "version": "3.2.0"
  }
}

@ElCoyote27 (Author)

In the system journal, I see these messages (I didn't get those before, when it worked):

{
  "ignition": {
    "config": {
      "merge": [
        {
          "source": "https://api-int.ocp4d.openshift.lasthome.solace.krynn:22623/config/worker"
        }
      ]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          {
            "source": "data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURFRENDQWZpZ0F3SUJBZ0lJTC9KNm5tUGx6Tk13RFFZSktvWklodmNOQVFFTEJRQXdKakVTTUJBR0ExVUUKQ3hNSmIzQmxibk5vYVdaME1SQXdEZ1lEVlFRR
EV3ZHliMjkwTFdOaE1CNFhEVEl4TVRFeE9URTJOREl3TUZvWApEVE14TVRFeE56RTJOREl3TUZvd0pqRVNNQkFHQTFVRUN4TUpiM0JsYm5Ob2FXWjBNUkF3RGdZRFZRUURFd2R5CmIyOTBMV05oTUlJQklqQU5CZ2txaGtpRzl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUF1SjU1RWxYUC82Uz
kKTTkvWWViUC9neGxQeEFlNFJVYzhpbGdDaWdKV0ZLZFp5QnhTOXQzQ1AyU0d5REx5S2FoT0pNRTNSSWZKK3FhbAo3YjI2ZEpUVXQ4a0daYUhpbTQyOGRHYXdiOTRzQjJWWWJUNGpPNGl3enM3Tm9udHdjY1NQQllRdk9XcmsxekhPCmtyQXRBcGFtbWp1bU1yTTRhNFRCMVZWVG5NQnA3eVZZcUV
kZ3l0Z29PZnA3VVN6aTVKK3p5VHhxKzdEN2FXejcKY0Z4RC9PN0x1ZFBMNEpkNVlnWGlIbjg5NHREdTFLQVl5RHVYYnNrZCtITUFFZGQycVFPZUZNNW1TZlpWQ3pZMwowSU1aemxMRDdvL0ZJOFRqS2hkN2NjSFZocks5YXVCdHhBbVREMjZBVUxQSEdQbWhyNHdqaVc0bWZZR3pyVmo0Cmtjb3JK
SEsyMndJREFRQUJvMEl3UURBT0JnTlZIUThCQWY4RUJBTUNBcVF3RHdZRFZSMFRBUUgvQkFVd0F3RUIKL3pBZEJnTlZIUTRFRmdRVW5PcTZMRC9ycXRxMlZWMnZtdjNwdFZQVmZ1OHdEUVlKS29aSWh2Y05BUUVMQlFBRApnZ0VCQUZ1WlRJTnVVR01KTDBZSkRDT3h1ZS8rbnM4QUhtR1l2dDA4Q
XZyNmJQQkNIZlJ0L1lBbHJHRzkzS1RVClZVUFJIeVdFVlNNSWU3bEt4bWlvMERHSFFMNDBYaWxuQjBaVExOdE5yUkVLN3JEM1M2NFRXTjd5YTNIYVhtQnUKSTFZNFFsUGFacUFqR3R1YmJveGY2N1NUaHFsL09IcVNGdkxzcUo2NFAwQW0yQ3hGb3N3N2VpSW9uMWJkNEErMgpsdlJqdlJQMWYxQ0
xrWTlTREJoRlVUVmwyTGNKMmlIUXVVc1cvNEJWM1owWmp1dmREbDFVWnRFZFVPUWxOdkliCnBITUFieXIwQXpxdWZwN2taODZkUHQzNm80dDJTeVpDY1VpY3RwTmZTYzhyWFZzbUU0S1NjcGdxSEd6KzFybVgKK255WmNDTE8rM0dwK2w5RTBjRWMyYTEyQ0lzPQotLS0tLUVORCBDRVJUSUZJQ0F
URS0tLS0tCg=="
          }
        ]
      }
    },
    "version": "3.2.0"
  }
}

@staebler (Contributor)

Are there errors reported in the status of the worker Machines?

oc get machine -A -oyaml

@ElCoyote27 (Author)

@staebler Yes, there are errors, let me get them to you...

This is really strange as I can set LIBVIRT_DEFAULT_URI to the same value as the one I set in my install-config and I can 'virsh start/stop/shutdown/whatever':

$ grep URI ocp-config/install-config-daltigoth.yaml 
    URI: qemu+tcp://172.21.122.1/system

virt-OCP]$ export LIBVIRT_DEFAULT_URI=qemu+tcp://172.21.122.1/system
virt-OCP]$ virsh list
 Id   Name       State
--------------------------
 1    idm        running
 2    vom7       running
 3    registry   running

@ElCoyote27 (Author)

It first starts like this:

# oc get machines -A
NAMESPACE               NAME                         PHASE          TYPE   REGION   ZONE   AGE
openshift-machine-api   ocp4d-c5tvf-master-0         Running                               15m
openshift-machine-api   ocp4d-c5tvf-master-1         Running                               15m
openshift-machine-api   ocp4d-c5tvf-master-2         Running                               15m
openshift-machine-api   ocp4d-c5tvf-worker-0-d7dzc   Provisioning                          12m
openshift-machine-api   ocp4d-c5tvf-worker-0-s8m7x   Provisioning                          12m
openshift-machine-api   ocp4d-c5tvf-worker-0-skd9j   Provisioning                          12m

At that time, I am getting the following YAML:


apiVersion: v1
items:
- apiVersion: machine.openshift.io/v1beta1
  kind: Machine
  metadata:
    creationTimestamp: "2021-11-25T21:20:19Z"
    finalizers:
    - machine.machine.openshift.io
    generation: 1
    labels:
      machine.openshift.io/cluster-api-cluster: ocp4d-c5tvf
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
    name: ocp4d-c5tvf-master-0
    namespace: openshift-machine-api
    resourceVersion: "23000"
    uid: 082eb65f-ef9d-4962-a94d-b5fa54dbd100
  spec:
    metadata: {}
    providerSpec:
      value:
        apiVersion: libvirtproviderconfig.openshift.io/v1beta1
        autostart: false
        cloudInit: null
        domainMemory: 24576
        domainVcpu: 8
        ignKey: ""
        ignition:
          userDataSecret: master-user-data
        kind: LibvirtMachineProviderConfig
        networkInterfaceAddress: 192.168.126.0/24
        networkInterfaceHostname: ""
        networkInterfaceName: ocp4d-c5tvf
        networkUUID: ""
        uri: qemu+tcp://172.21.122.1/system
        volume:
          baseVolumeID: ocp4d-c5tvf-base
          poolName: ocp4d-c5tvf
          volumeName: ""
          volumeSize: 274877906944
  status:
    addresses:
    - address: 192.168.126.11
      type: InternalIP
    - address: ocp4d-c5tvf-master-0
      type: Hostname
    - address: ocp4d-c5tvf-master-0
      type: InternalDNS
    lastUpdated: "2021-11-25T21:36:47Z"
    nodeRef:
      kind: Node
      name: ocp4d-c5tvf-master-0
      uid: e5c4d2dc-c60d-4e6c-84c6-534e32627c9f
    phase: Running
    providerStatus:
      apiVersion: libvirtproviderconfig.openshift.io/v1beta1
      conditions: null
      instanceID: 7512f14c-cecc-4a8a-a648-24c3640ccec6
      instanceState: Running
      kind: LibvirtMachineProviderStatus
- apiVersion: machine.openshift.io/v1beta1
  kind: Machine
  metadata:
    creationTimestamp: "2021-11-25T21:20:18Z"
    finalizers:
    - machine.machine.openshift.io
    generation: 1
    labels:
      machine.openshift.io/cluster-api-cluster: ocp4d-c5tvf
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
    name: ocp4d-c5tvf-master-1
    namespace: openshift-machine-api
    resourceVersion: "23002"
    uid: 32a9e031-659d-4574-a40f-b74e4ff75aa2
  spec:
    metadata: {}
    providerSpec:
      value:
        apiVersion: libvirtproviderconfig.openshift.io/v1beta1
        autostart: false
        cloudInit: null
        domainMemory: 8192
        domainVcpu: 4
        ignKey: ""
        ignition:
          userDataSecret: master-user-data
        kind: LibvirtMachineProviderConfig
        networkInterfaceAddress: 192.168.126.0/24
        networkInterfaceHostname: ""
        networkInterfaceName: ocp4d-c5tvf
        networkUUID: ""
        uri: qemu+tcp://172.21.122.1/system
        volume:
          baseVolumeID: ocp4d-c5tvf-base
          poolName: ocp4d-c5tvf
          volumeName: ""
  status:
    addresses:
    - address: 192.168.126.12
      type: InternalIP
    - address: ocp4d-c5tvf-master-1
      type: Hostname
    - address: ocp4d-c5tvf-master-1
      type: InternalDNS
    lastUpdated: "2021-11-25T21:36:47Z"
    nodeRef:
      kind: Node
      name: ocp4d-c5tvf-master-1
      uid: 17ef4804-c5d5-401b-978c-fef91f715169
    phase: Running
    providerStatus:
      apiVersion: libvirtproviderconfig.openshift.io/v1beta1
      conditions: null
      instanceID: 8dddfb58-7e38-4105-a98a-6cb085fff45e
      instanceState: Running
      kind: LibvirtMachineProviderStatus
- apiVersion: machine.openshift.io/v1beta1
  kind: Machine
  metadata:
    creationTimestamp: "2021-11-25T21:20:18Z"
    finalizers:
    - machine.machine.openshift.io
    generation: 1
    labels:
      machine.openshift.io/cluster-api-cluster: ocp4d-c5tvf
      machine.openshift.io/cluster-api-machine-role: master
      machine.openshift.io/cluster-api-machine-type: master
    name: ocp4d-c5tvf-master-2
    namespace: openshift-machine-api
    resourceVersion: "23004"
    uid: a40f806d-09fd-4409-90c2-05d3deafbd02
  spec:
    metadata: {}
    providerSpec:
      value:
        apiVersion: libvirtproviderconfig.openshift.io/v1beta1
        autostart: false
        cloudInit: null
        domainMemory: 8192
        domainVcpu: 4
        ignKey: ""
        ignition:
          userDataSecret: master-user-data
        kind: LibvirtMachineProviderConfig
        networkInterfaceAddress: 192.168.126.0/24
        networkInterfaceHostname: ""
        networkInterfaceName: ocp4d-c5tvf
        networkUUID: ""
        uri: qemu+tcp://172.21.122.1/system
        volume:
          baseVolumeID: ocp4d-c5tvf-base
          poolName: ocp4d-c5tvf
          volumeName: ""
  status:
    addresses:
    - address: 192.168.126.13
      type: InternalIP
    - address: ocp4d-c5tvf-master-2
      type: Hostname
    - address: ocp4d-c5tvf-master-2
      type: InternalDNS
    lastUpdated: "2021-11-25T21:36:47Z"
    nodeRef:
      kind: Node
      name: ocp4d-c5tvf-master-2
      uid: a4f8fbec-17f6-4f3a-9579-458d0febb950
    phase: Running
    providerStatus:
      apiVersion: libvirtproviderconfig.openshift.io/v1beta1
      conditions: null
      instanceID: 8518304c-7e8b-46b0-b18c-1a86a7ba6f9b
      instanceState: Running
      kind: LibvirtMachineProviderStatus
- apiVersion: machine.openshift.io/v1beta1
  kind: Machine
  metadata:
    creationTimestamp: "2021-11-25T21:23:24Z"
    finalizers:
    - machine.machine.openshift.io
    generateName: ocp4d-c5tvf-worker-0-
    generation: 1
    labels:
      machine.openshift.io/cluster-api-cluster: ocp4d-c5tvf
      machine.openshift.io/cluster-api-machine-role: worker
      machine.openshift.io/cluster-api-machine-type: worker
      machine.openshift.io/cluster-api-machineset: ocp4d-c5tvf-worker-0
    name: ocp4d-c5tvf-worker-0-d7dzc
    namespace: openshift-machine-api
    ownerReferences:
    - apiVersion: machine.openshift.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: MachineSet
      name: ocp4d-c5tvf-worker-0
      uid: 05270d70-da6f-4b45-84f7-fc519e388f6b
    resourceVersion: "8467"
    uid: 7212d72a-50f6-42ef-a0df-555128dbd226
  spec:
    metadata: {}
    providerSpec:
      value:
        apiVersion: libvirtproviderconfig.openshift.io/v1beta1
        autostart: false
        cloudInit: null
        domainMemory: 49152
        domainVcpu: 8
        ignKey: ""
        ignition:
          userDataSecret: worker-user-data
        kind: LibvirtMachineProviderConfig
        networkInterfaceAddress: 192.168.126.0/24
        networkInterfaceHostname: ""
        networkInterfaceName: ocp4d-c5tvf
        networkUUID: ""
        uri: qemu+tcp://172.21.122.1/system
        volume:
          baseVolumeID: ocp4d-c5tvf-base
          poolName: ocp4d-c5tvf
          volumeName: ""
          volumeSize: 274877906944
  status:
    lastUpdated: "2021-11-25T21:23:27Z"
    phase: Provisioning
- apiVersion: machine.openshift.io/v1beta1
  kind: Machine
  metadata:
    creationTimestamp: "2021-11-25T21:23:24Z"
    finalizers:
    - machine.machine.openshift.io
    generateName: ocp4d-c5tvf-worker-0-
    generation: 1
    labels:
      machine.openshift.io/cluster-api-cluster: ocp4d-c5tvf
      machine.openshift.io/cluster-api-machine-role: worker
      machine.openshift.io/cluster-api-machine-type: worker
      machine.openshift.io/cluster-api-machineset: ocp4d-c5tvf-worker-0
    name: ocp4d-c5tvf-worker-0-s8m7x
    namespace: openshift-machine-api
    ownerReferences:
    - apiVersion: machine.openshift.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: MachineSet
      name: ocp4d-c5tvf-worker-0
      uid: 05270d70-da6f-4b45-84f7-fc519e388f6b
    resourceVersion: "8367"
    uid: c2228b56-78a6-475f-af59-01dc1670b9ad
  spec:
    metadata: {}
    providerSpec:
      value:
        apiVersion: libvirtproviderconfig.openshift.io/v1beta1
        autostart: false
        cloudInit: null
        domainMemory: 49152
        domainVcpu: 8
        ignKey: ""
        ignition:
          userDataSecret: worker-user-data
        kind: LibvirtMachineProviderConfig
        networkInterfaceAddress: 192.168.126.0/24
        networkInterfaceHostname: ""
        networkInterfaceName: ocp4d-c5tvf
        networkUUID: ""
        uri: qemu+tcp://172.21.122.1/system
        volume:
          baseVolumeID: ocp4d-c5tvf-base
          poolName: ocp4d-c5tvf
          volumeName: ""
          volumeSize: 274877906944
  status:
    lastUpdated: "2021-11-25T21:23:26Z"
    phase: Provisioning
- apiVersion: machine.openshift.io/v1beta1
  kind: Machine
  metadata:
    creationTimestamp: "2021-11-25T21:23:24Z"
    finalizers:
    - machine.machine.openshift.io
    generateName: ocp4d-c5tvf-worker-0-
    generation: 1
    labels:
      machine.openshift.io/cluster-api-cluster: ocp4d-c5tvf
      machine.openshift.io/cluster-api-machine-role: worker
      machine.openshift.io/cluster-api-machine-type: worker
      machine.openshift.io/cluster-api-machineset: ocp4d-c5tvf-worker-0
    name: ocp4d-c5tvf-worker-0-skd9j
    namespace: openshift-machine-api
    ownerReferences:
    - apiVersion: machine.openshift.io/v1beta1
      blockOwnerDeletion: true
      controller: true
      kind: MachineSet
      name: ocp4d-c5tvf-worker-0
      uid: 05270d70-da6f-4b45-84f7-fc519e388f6b
    resourceVersion: "8528"
    uid: 89562158-3bb6-4245-9640-ce8d337d3219
  spec:
    metadata: {}
    providerSpec:
      value:
        apiVersion: libvirtproviderconfig.openshift.io/v1beta1
        autostart: false
        cloudInit: null
        domainMemory: 49152
        domainVcpu: 8
        ignKey: ""
        ignition:
          userDataSecret: worker-user-data
        kind: LibvirtMachineProviderConfig
        networkInterfaceAddress: 192.168.126.0/24
        networkInterfaceHostname: ""
        networkInterfaceName: ocp4d-c5tvf
        networkUUID: ""
        uri: qemu+tcp://172.21.122.1/system
        volume:
          baseVolumeID: ocp4d-c5tvf-base
          poolName: ocp4d-c5tvf
          volumeName: ""
          volumeSize: 274877906944
  status:
    lastUpdated: "2021-11-25T21:23:29Z"
    phase: Provisioning
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

@ElCoyote27 (Author)

# oc describe machine ocp4d-c5tvf-worker-0-d7dzc|tail -10
        Volume Size:     274877906944
Status:
  Last Updated:  2021-11-25T21:23:27Z
  Phase:         Provisioning
Events:
  Type     Reason        Age                   From                Message
  ----     ------        ----                  ----                -------
  Warning  FailedCreate  21m (x2 over 21m)     libvirt-controller  CreateError
  Warning  FailedCreate  17m (x15 over 18m)    libvirt-controller  CreateError
  Warning  FailedCreate  2m42s (x19 over 14m)  libvirt-controller  CreateError
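The CreateError events carry no message of their own; if more detail is needed, it can usually be pulled from the machine status and the machine-api controller logs, e.g. (a hedged sketch, the deployment/container names may differ between releases):

oc -n openshift-machine-api get machine ocp4d-c5tvf-worker-0-d7dzc \
   -o jsonpath='{.status.errorReason}{" "}{.status.errorMessage}{"\n"}'
oc -n openshift-machine-api logs deploy/machine-api-controllers -c machine-controller --tail=50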

@ElCoyote27 (Author)

I'm waiting for the machine config to fail and will provide another YAML.

@ElCoyote27 (Author)

There's also one machineset (workers only):

# oc get machinesets 
NAME                   DESIRED   CURRENT   READY   AVAILABLE   AGE
ocp4d-c5tvf-worker-0   3         3                             28m

@ElCoyote27 (Author)

I'm also seeing those messages in the system's log:

Nov 25 16:52:36 daltigoth libvirtd[1099001]: Operation not supported: can't update 'bridge' section of network 'ocp4d-c5tvf'
Nov 25 16:52:39 daltigoth libvirtd[1099001]: Operation not supported: can't update 'bridge' section of network 'ocp4d-c5tvf'
Nov 25 16:52:40 daltigoth libvirtd[1099001]: Operation not supported: can't update 'bridge' section of network 'ocp4d-c5tvf'
Nov 25 16:52:41 daltigoth libvirtd[1099001]: Operation not supported: can't update 'bridge' section of network 'ocp4d-c5tvf'
Nov 25 16:52:49 daltigoth libvirtd[1099001]: Operation not supported: can't update 'bridge' section of network 'ocp4d-c5tvf'
Nov 25 16:52:50 daltigoth libvirtd[1099001]: Operation not supported: can't update 'bridge' section of network 'ocp4d-c5tvf'
Nov 25 16:52:51 daltigoth libvirtd[1099001]: Operation not supported: can't update 'bridge' section of network 'ocp4d-c5tvf'
Nov 25 16:53:10 daltigoth libvirtd[1099001]: Operation not supported: can't update 'bridge' section of network 'ocp4d-c5tvf'
Nov 25 16:53:11 daltigoth libvirtd[1099001]: Operation not supported: can't update 'bridge' section of network 'ocp4d-c5tvf'
Nov 25 16:53:12 daltigoth libvirtd[1099001]: Operation not supported: can't update 'bridge' section of network 'ocp4d-c5tvf'

@ElCoyote27 (Author)

This seems somewhat similar to:
digitalocean/go-libvirt#87

The network created by libvirt IPI looks like this:

<network>
  <name>ocp4d-c5tvf</name>
  <uuid>18882b7e-9ac7-4089-bb28-2316ecbd2dbb</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='tt0' stp='on' delay='0'/>
  <mac address='52:54:00:27:65:df'/>
  <domain name='ocp4d.openshift.lasthome.solace.krynn' localOnly='yes'/>
  <dns enable='yes'>
    <forwarder domain='apps.ocp4d.openshift.lasthome.solace.krynn' addr='192.168.122.1'/>
    <host ip='192.168.126.12'>
      <hostname>api.ocp4d.openshift.lasthome.solace.krynn</hostname>
      <hostname>api-int.ocp4d.openshift.lasthome.solace.krynn</hostname>
    </host>
    <host ip='192.168.126.13'>
      <hostname>api.ocp4d.openshift.lasthome.solace.krynn</hostname>
      <hostname>api-int.ocp4d.openshift.lasthome.solace.krynn</hostname>
    </host>
    <host ip='192.168.126.11'>
      <hostname>api.ocp4d.openshift.lasthome.solace.krynn</hostname>
      <hostname>api-int.ocp4d.openshift.lasthome.solace.krynn</hostname>
    </host>
  </dns>
  <ip family='ipv4' address='192.168.126.1' prefix='24'>
    <dhcp>
      <range start='192.168.126.2' end='192.168.126.254'/>
      <host mac='52:54:00:b2:fb:a2' name='ocp4d-c5tvf-master-1.ocp4d.openshift.lasthome.solace.krynn' ip='192.168.126.12'/>
      <host mac='52:54:00:d2:10:54' name='ocp4d-c5tvf-bootstrap.ocp4d.openshift.lasthome.solace.krynn' ip='192.168.126.10'/>
      <host mac='52:54:00:32:97:83' name='ocp4d-c5tvf-master-2.ocp4d.openshift.lasthome.solace.krynn' ip='192.168.126.13'/>
      <host mac='52:54:00:4a:ce:da' name='ocp4d-c5tvf-master-0.ocp4d.openshift.lasthome.solace.krynn' ip='192.168.126.11'/>
    </dhcp>
  </ip>
</network>
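For each new worker the machine controller adds a DHCP host entry to this network, and that network update is what libvirtd is rejecting above. Roughly the same operation can be issued by hand with virsh (a sketch with hypothetical MAC/IP/name values):

virsh net-update ocp4d-c5tvf add-last ip-dhcp-host \
  "<host mac='52:54:00:00:00:51' name='test-worker' ip='192.168.126.51'/>" \
  --live --config

If the client and the daemon disagree about the order of the 'command' and 'section' arguments of virNetworkUpdate, the daemon ends up acting on a different section than the one the client asked for, which would explain errors like "can't update 'bridge' section of network 'ocp4d-c5tvf'".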

@staebler (Contributor)

As this is an issue with creating workers, this does not appear to be an installer issue. I recommend opening an issue in https://github.com/openshift/cluster-api-provider-libvirt.

@ElCoyote27 (Author) commented Nov 29, 2021

@staebler this is interesting... Where does the openshift installer get that provider from? All I'm doing is downloading the openshift-installer source code. Is it amongst the dependencies that are downloaded by the go installer?

@staebler (Contributor)

The installer just sets up some infrastructure and gives some configuration to the bootstrap VM to ultimately build the cluster. The various images that make up the cluster come from the release payload to which the installer belongs.

For example, you can find details for the latest OCP release image (4.9.9) at https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.9.9/release.txt.
The release image is quay.io/openshift-release-dev/ocp-release@sha256:dc6d4d8b2f9264c0037ed0222285f19512f112cc85a355b14a66bd6b910a4940.
It uses quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bdfb1ae492c65214d4119b1aa1ea27162a4f8f84992feeddfa6b47ea53c16b87 for the libvirt-machine-controllers image.
Note that the installer itself is part of the release image as well, and it is quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:61b0f37bdd997903544c4472500b2bc30ea1e4e2e83b1d33132db7ae9ddbf31b.
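For example, the image a given release payload uses for a component can be looked up with oc (a sketch, using the 4.9.9 release image quoted above):

oc adm release info \
  quay.io/openshift-release-dev/ocp-release@sha256:dc6d4d8b2f9264c0037ed0222285f19512f112cc85a355b14a66bd6b910a4940 \
  --image-for=libvirt-machine-controllers
oc adm release info \
  quay.io/openshift-release-dev/ocp-release@sha256:dc6d4d8b2f9264c0037ed0222285f19512f112cc85a355b14a66bd6b910a4940 \
  --image-for=installer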

@jaypoulz (Contributor) commented Feb 5, 2022

Just ran into this as well - I'll look into Monday

@ElCoyote27 (Author)

Hi @jaypoulz I was working with @cfergeau on this and we already have a BZ for this:
https://bugzilla.redhat.com/show_bug.cgi?id=2038812

In the interim, I've reverted from virt:av to virt:rhel
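For anyone wanting to do the same, the stream switch looks roughly like this (a sketch; double-check the package set on your host before running it):

dnf module reset virt -y
dnf module enable virt:rhel -y
dnf distro-sync -y          # replaces the virt:av libvirt/qemu-kvm builds with the virt:rhel ones
systemctl restart libvirtd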

@ElCoyote27 (Author)

@jaypoulz (Contributor) commented Feb 5, 2022

@ElCoyote27 I encountered it in stock RHEL 8.5. I had to downgrade libvirt-6.0.0-37.1.module+el8.5.0+13858+39fdc467.aarch64 to 6.0.0-37.module+el8.5.0+12162+40884dd2.aarch64. So watch out for the latest 8.5 updates. CC @cfergeau
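In shell terms that downgrade looks roughly like this (a sketch; the NVR is the one quoted in this comment, adjust version/arch for your system, and related libvirt-daemon-* subpackages may need to be listed explicitly):

dnf downgrade libvirt-6.0.0-37.module+el8.5.0+12162+40884dd2 -y
systemctl restart libvirtd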

@ElCoyote27 (Author)

@jaypoulz OMG, this sucks if we backported the problematic patch down to virt:rhel too. My 8.5 systems got the update on Feb 2nd, trying to confirm if this broke OCP for me too...

@jaypoulz (Contributor) commented Feb 5, 2022

Our systems got upgraded today, so the search for the breaking change was short. 😸 OpenShift installer was built on the latest 4.10 preview. We can use that to look up the libvirt-terraform version if need be. I'll try to get a reproducer Monday.

@ElCoyote27 (Author)

My RHEL 8.5 system was running libvirt 6.0.0-37.module+el8.5.0+12162+40884dd2
I received the 6.0.0-37.1.module+el8.5.0+13858+39fdc467 updates two days ago but I hadn't restarted libvirt (which is why OCP on libvirt was still working).
As soon as I restarted libvirt, things stopped working with ocp_libvirt_ipi.

@ElCoyote27 (Author)

I've updated https://bugzilla.redhat.com/show_bug.cgi?id=2038812 to provide the information you reported (and confirm your findings).

@ElCoyote27 (Author)

I can confirm that the original issue I got on virt:av is back:

[root@daltigoth ~]# oc get machines -A
NAMESPACE               NAME                         PHASE          TYPE   REGION   ZONE   AGE
openshift-machine-api   ocp4d-4cfd9-master-0         Running                               11m
openshift-machine-api   ocp4d-4cfd9-master-1         Running                               11m
openshift-machine-api   ocp4d-4cfd9-master-2         Running                               11m
openshift-machine-api   ocp4d-4cfd9-worker-0-bjml6   Provisioning                          8m14s
openshift-machine-api   ocp4d-4cfd9-worker-0-w6fnq   Provisioning                          8m14s
openshift-machine-api   ocp4d-4cfd9-worker-0-xpf65   Provisioning                          8m14s

@ElCoyote27 (Author)

In the libvirtd log:

Feb 05 20:00:41 ravenvale libvirtd[4978]: Operation not supported: can't update 'bridge' section of network 'ocp4r-hztzv'
Feb 05 20:00:42 ravenvale libvirtd[4978]: Operation not supported: can't update 'bridge' section of network 'ocp4r-hztzv'
Feb 05 20:00:43 ravenvale libvirtd[4978]: Operation not supported: can't update 'bridge' section of network 'ocp4r-hztzv'
Feb 05 20:00:51 ravenvale libvirtd[4978]: Operation not supported: can't update 'bridge' section of network 'ocp4r-hztzv'
Feb 05 20:00:52 ravenvale libvirtd[4978]: Operation not supported: can't update 'bridge' section of network 'ocp4r-hztzv'
Feb 05 20:00:53 ravenvale libvirtd[4978]: Operation not supported: can't update 'bridge' section of network 'ocp4r-hztzv'
Feb 05 20:01:12 ravenvale libvirtd[4978]: Operation not supported: can't update 'bridge' section of network 'ocp4r-hztzv'
Feb 05 20:01:13 ravenvale libvirtd[4978]: Operation not supported: can't update 'bridge' section of network 'ocp4r-hztzv'
Feb 05 20:01:14 ravenvale libvirtd[4978]: Operation not supported: can't update 'bridge' section of network 'ocp4r-hztzv'

@zippy2 commented Feb 7, 2022

Is it possible that the installer uses terraform-provider-libvirt which in turn has this commit?

dmacvicar/terraform-provider-libvirt@0d74474

Because if it is so, then it's actually terraform-provider-libvirt that swaps the arguments. BTW that commit is horribly wrong, let me comment on it.

@ElCoyote27 (Author)

@zippy2 Hi Michael, the terraform part of the install process (bootstrap + masters) works fine; it is the OCP-piloting-libvirt phase that broke recently in virt:rhel. At that point OCP is using the machine-config-api with an unencrypted libvirt private URI.

@openshift-bot (Contributor)

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

openshift-ci bot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on May 8, 2022
@ElCoyote27 (Author)

/remove-lifecycle stale.

@ElCoyote27 (Author)

@zippy2 Looks like it, grepping dmacvicar in the source of 4.10.0, I see all this:
[root@daltigoth /opt/OCP-4/go]# grep -r dmacvicar
src/github.com/openshift/installer/go.mod: github.com/dmacvicar/terraform-provider-libvirt v0.6.4-0.20201216193629-2b60d7626ff8
src/github.com/openshift/installer/vendor/github.com/dmacvicar/terraform-provider-libvirt/libvirt/resource_libvirt_domain.go: "github.com/dmacvicar/terraform-provider-libvirt/libvirt/helper/suppress"
src/github.com/openshift/installer/vendor/github.com/dmacvicar/terraform-provider-libvirt/libvirt/resource_libvirt_volume.go: // see for reference: https://github.com/dmacvicar/terraform-provider-libvirt/issues/494
src/github.com/openshift/installer/vendor/github.com/dmacvicar/terraform-provider-libvirt/libvirt/resource_libvirt_network.go: // see https://github.com/dmacvicar/terraform-provider-libvirt/issues/739
src/github.com/openshift/installer/vendor/modules.txt:# github.com/dmacvicar/terraform-provider-libvirt v0.6.4-0.20201216193629-2b60d7626ff8
src/github.com/openshift/installer/vendor/modules.txt:github.com/dmacvicar/terraform-provider-libvirt/libvirt
src/github.com/openshift/installer/vendor/modules.txt:github.com/dmacvicar/terraform-provider-libvirt/libvirt/helper/suppress
Binary file src/github.com/openshift/installer/bin/openshift-install matches
src/github.com/openshift/installer/go.sum:github.com/dmacvicar/terraform-provider-libvirt v0.6.4-0.20201216193629-2b60d7626ff8 h1:mTHVoBkXbLg/Yyi2BxT+N5y3v1/KjktwnEBls37g5Ds=
src/github.com/openshift/installer/go.sum:github.com/dmacvicar/terraform-provider-libvirt v0.6.4-0.20201216193629-2b60d7626ff8/go.mod h1:RZqLUAMFQ32TmKpk1Ayb4zeTe7+7k0jfsWpW1UTqVDw=
src/github.com/openshift/installer/pkg/terraform/exec/plugins/libvirt.go: "github.com/dmacvicar/terraform-provider-libvirt/libvirt"

@zippy2

zippy2 commented May 9, 2022

In that case I'm not sure I can help, sorry. Fixed packages were shipped ~3 months ago. I guess your best bet is to talk to the installer developers and have them fix their code. There's a suggested solution:

dmacvicar/terraform-provider-libvirt@0d74474#commitcomment-68720367

Since terraform-provider-libvirt talks RPC directly, it has to take that extra step and check whether the daemon it is talking to is fixed or not. That's another advantage of using the client library rather than speaking RPC directly.
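
To make that concrete: the extra step boils down to asking the daemon, over the same RPC channel, whether it advertises the corrected virNetworkUpdate argument order, and only then deciding how to order the parameters. A rough Go sketch of the idea, assuming go-libvirt's generated ConnectSupportsFeature wrapper (exact signature assumed) and leaving the numeric value of VIR_DRV_FEATURE_NETWORK_UPDATE_HAS_CORRECT_ORDER as a parameter rather than hard-coding it:

package libvirtcompat

import (
    "log"

    libvirt "github.com/digitalocean/go-libvirt"
)

// daemonHasFixedNetworkUpdate asks the daemon whether it advertises the
// corrected virNetworkUpdate argument order. featureID should carry the value
// of libvirt's VIR_DRV_FEATURE_NETWORK_UPDATE_HAS_CORRECT_ORDER constant,
// taken from the libvirt sources (not hard-coded here, since this is a sketch).
func daemonHasFixedNetworkUpdate(l *libvirt.Libvirt, featureID int32) bool {
    supported, err := l.ConnectSupportsFeature(featureID)
    if err != nil {
        // Older daemons may reject the query outright; treat that as the old order.
        log.Printf("feature query failed, assuming old argument order: %v", err)
        return false
    }
    return supported == 1
}

The point is that a client speaking the RPC protocol directly has to do this negotiation itself, whereas virsh and the C client library get it for free.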

@cfergeau
Contributor

Nope, the installer is not using this commit

@zippy2 Looks like it, grepping dmacvicar in the source of 4.10.0, I see all this:

[root@daltigoth /opt/OCP-4/go]# grep -r dmacvicar
src/github.com/openshift/installer/go.mod: github.com/dmacvicar/terraform-provider-libvirt v0.6.4-0.20201216193629-2b60d7626ff8

2b60d7626ff8 is v0.6.9-pre1~83 (after running git describe --contains 2b60d7626ff8 in a github.com/dmacvicar/terraform-provider-libvirt checkout)

The problematic commit is dmacvicar/terraform-provider-libvirt@0d74474 which is described as v0.6.9-pre1~5. This is newer than the commit used by the installer.
This is consistent with the code vendored by the installer:

// Adds a new static host to the network
func addHost(n *libvirt.Network, ip, mac, name string, xmlIdx int) error {
    xmlDesc := getHostXMLDesc(ip, mac, name)
    log.Printf("Adding host with XML:\n%s", xmlDesc)
    // From https://libvirt.org/html/libvirt-libvirt-network.html#virNetworkUpdateFlags
    // Update live and config for hosts to make update permanent across reboots
    return n.Update(libvirt.NETWORK_UPDATE_COMMAND_ADD_LAST, libvirt.NETWORK_SECTION_IP_DHCP_HOST, xmlIdx, xmlDesc, libvirt.NETWORK_UPDATE_AFFECT_CONFIG|libvirt.NETWORK_UPDATE_AFFECT_LIVE)
}
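
For contrast, conceptually what the problematic commit does is pass those same two enums to the update call in the opposite order, to match what the daemon on the other end decodes; which daemon versions expect which order is exactly the compatibility mess discussed in this thread. A rough sketch, written as if it sat next to the vendored addHost above (an illustration of the idea, not the literal diff from 0d74474):

// Sketch only: the same call as addHost above, with command and section
// exchanged. With the typed libvirt-go binding this needs explicit
// conversions, which is already a hint that something is off; over raw RPC
// both values travel as plain integers, so the swap goes unnoticed until the
// daemon rejects the update.
func addHostSwappedOrder(n *libvirt.Network, ip, mac, name string, xmlIdx int) error {
    xmlDesc := getHostXMLDesc(ip, mac, name)
    return n.Update(
        libvirt.NetworkUpdateCommand(libvirt.NETWORK_SECTION_IP_DHCP_HOST),   // section value in the command slot
        libvirt.NetworkUpdateSection(libvirt.NETWORK_UPDATE_COMMAND_ADD_LAST), // command value in the section slot
        xmlIdx, xmlDesc,
        libvirt.NETWORK_UPDATE_AFFECT_CONFIG|libvirt.NETWORK_UPDATE_AFFECT_LIVE)
}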

@cfergeau
Contributor

The master and 4.11 branches of github.com/openshift/installer are using github.com/dmacvicar/terraform-provider-libvirt v0.6.12, though, which does have the problematic commit :(

@cfergeau
Contributor

cfergeau commented May 12, 2022

I've filed dmacvicar/terraform-provider-libvirt#950 which should fix the issue introduced in dmacvicar/terraform-provider-libvirt@0d74474
This probably belongs in digitalocean/go-libvirt rather than there.
EDIT: the go-libvirt patch is digitalocean/go-libvirt#148

@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 16, 2022
@cfergeau
Contributor

The digitalocean patch was merged at about the time the stale bot was triggered on this issue :)
digitalocean/go-libvirt#148

I've updated the terraform-provider-libvirt PR dmacvicar/terraform-provider-libvirt#950 to make use of this.

@cfergeau
Contributor

/remove-lifecycle rotten

@openshift-ci openshift-ci bot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 16, 2022
@openshift-bot
Contributor

Issues go stale after 90d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle stale

@openshift-ci openshift-ci bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 15, 2022
@openshift-bot
Contributor

Stale issues rot after 30d of inactivity.

Mark the issue as fresh by commenting /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
Exclude this issue from closing by commenting /lifecycle frozen.

If this issue is safe to close now please do so with /close.

/lifecycle rotten
/remove-lifecycle stale

@openshift-ci openshift-ci bot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 15, 2022
@openshift-bot
Contributor

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

@openshift-ci openshift-ci bot closed this as completed Nov 15, 2022
@openshift-ci
Contributor

openshift-ci bot commented Nov 15, 2022

@openshift-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.

Reopen the issue by commenting /reopen.
Mark the issue as fresh by commenting /remove-lifecycle rotten.
Exclude this issue from closing again by commenting /lifecycle frozen.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
