Cannot undefine domain with nvram attached: Call to virDomainUndefineFlags failed: #1371

Closed
abbbi opened this issue Oct 5, 2021 · 19 comments


@abbbi
Contributor

abbbi commented Oct 5, 2021

Hi,

attempting to get Windows 11 going (sigh), it needs to use UEFI. While that works nicely on Debian, it seems I need to use nvram settings on CentOS 8* hosts. Unfortunately, I can't undefine a virtual
machine if the nvram setting is active:

`/home/buildtest/.vagrant.d/gems/2.6.6/gems/fog-libvirt-0.9.0/lib/fog/libvirt/requests/compute/vm_action.rb:7:in `undefine': Call to virDomainUndefineFlags failed: Requested operation is not valid: cannot undefine domain with nvram (Libvirt::Error)

Example Vagrantfile settings:


        domain.loader = "/usr/share/OVMF/OVMF_CODE.secboot.fd"
        domain.nvram = true

I guess that's an issue in fog-libvirt: it does not set the VIR_DOMAIN_UNDEFINE_NVRAM flag while calling virDomainUndefine.
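
For reference, a minimal sketch of what passing that flag looks like at the ruby-libvirt level. This is only an illustration: the domain name is a placeholder, and whether Domain#undefine accepts a flags argument depends on the installed ruby-libvirt version (flag support landed around 0.8.0).

require 'libvirt'  # ruby-libvirt gem

# VIR_DOMAIN_UNDEFINE_NVRAM (0x4) from libvirt's C headers: also delete
# the nvram file instead of refusing to undefine the domain.
VIR_DOMAIN_UNDEFINE_NVRAM = 0x4

conn = Libvirt::open('qemu:///system')
dom  = conn.lookup_domain_by_name('my_uefi_domain')  # placeholder name
dom.undefine(VIR_DOMAIN_UNDEFINE_NVRAM)  # plain dom.undefine raises here
conn.close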

@electrofelix
Contributor

#1329 should solve this; it just needs the related change to land in fog-libvirt, and then for the dependency bump, along with the PR setting this flag, to land here. Might be no harm in pinging the fog-libvirt devs.

@electrofelix
Contributor

Actually, it looks like the PR for fog-libvirt (fog/fog-libvirt#102) needs a minor update.

@Jeansen

Jeansen commented Nov 22, 2021

Is there any workaround for this? I know this error from destroying a VM, and added a trigger that simply manipulates the XML and removes the relevant entries. But currently, I get this error when a domain is being created ...
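
A sketch of such a destroy-stage trigger, for anyone else hitting this. Everything here is illustrative: "mydomain_default" is a placeholder, and the sed-based edit is just one way to strip the <nvram> element from the definition so the subsequent undefine succeeds.

# Vagrantfile excerpt; "mydomain_default" is a placeholder
config.trigger.before :destroy do |trigger|
  trigger.info = "Stripping <nvram> so libvirt can undefine the domain"
  trigger.run = {
    inline: "bash -c 'virsh --connect qemu:///system dumpxml mydomain_default | sed \"/<nvram/d\" > /tmp/dom.xml && virsh --connect qemu:///system define /tmp/dom.xml'"
  }
end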

@Jeansen

Jeansen commented Nov 22, 2021

Looks like this problem came in with the latest 0.7.0 version. Up to 0.6.3 my Vagrant script works fine.

@electrofelix
Contributor

@Jeansen can you share a debug log? There wasn't anything done between 0.6.3 and 0.7.0 that should have impacted nvram definitions.

@Jeansen

Jeansen commented Nov 22, 2021

@electrofelix Hm, strange. I'll experiment a bit with different versions to eliminate any side effects. I'll then post my results here.

@Jeansen

Jeansen commented Nov 22, 2021

Ah, I am sorry, I had a typo. It is version 0.5.3 that works for me. Here are some excerpts of my script's output.

From version 0.4.1 to 0.5.3 some more error output appeared, but it still works. With version 0.6 and above, I get the stack trace you see.

Unfortunately, this project is still something I use as my personal playground; it is not yet on GitHub and, with the additional scripts involved, quite complex.

Anyway, if you need more insight etc., please let me know. I'll try to provide what I can.

With 0.4.1:

Bringing machine 'efi-test' up with 'libvirt' provider...
==> efi-test: Running action triggers before up ...
==> efi-test: Running trigger...
==> efi-test: Adding synced folders.
==> efi-test: Running action triggers before VagrantPlugins::ProviderLibvirt::Action::StartDomain ...
==> efi-test: Running trigger...
==> efi-test: Setup pool for Test efi.
    efi-test: Running local: Inline script
    efi-test: bash -c 'export LIBVIRT_DEFAULT_URI=qemu:///system;
    efi-test:               virt-xml bcrm_test_efi-test --edit model=lsilogic --controller model=virtio-scsi;
    efi-test:               virsh detach-disk bcrm_test_efi-test sda --config;
    efi-test:               virsh attach-disk bcrm_test_efi-test /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/disks/libvirt/efi/disk2_1-3.qcow2 sda --targetbus scsi --driver qemu --subdriver qcow2 --type disk --config'
    efi-test: ERROR    No matching objects found for --edit model=lsilogic
    efi-test:
    efi-test: Disk detached successfully
    efi-test:
    efi-test:
    efi-test: Disk attached successfully
    efi-test:
    efi-test:
==> efi-test: Starting domain.
==> efi-test: Error when updating domain settings: Call to virDomainUndefineFlags failed: Requested operation is not valid: cannot undefine domain with nvram
==> efi-test: Waiting for domain to get an IP address...
==> efi-test: Waiting for SSH to become available...
==> efi-test: Creating shared folders metadata...
==> efi-test: Exporting NFS shared folders...
==> efi-test: Preparing to edit /etc/exports. Administrator privileges will be required...
==> efi-test: Mounting NFS shared folders...
==> efi-test: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> efi-test: flag to force provisioning. Provisioners marked to run always will still run.

With 0.5.0 to 0.5.3:

Bringing machine 'efi-test' up with 'libvirt' provider...
==> efi-test: Running action triggers before up ...
==> efi-test: Running trigger...
==> efi-test: Adding synced folders.
==> efi-test: Running action triggers before VagrantPlugins::ProviderLibvirt::Action::StartDomain ...
==> efi-test: Running trigger...
==> efi-test: Setup pool for Test efi.
    efi-test: Running local: Inline script
    efi-test: bash -c 'export LIBVIRT_DEFAULT_URI=qemu:///system;
    efi-test:               virt-xml bcrm_test_efi-test --edit model=lsilogic --controller model=virtio-scsi;
    efi-test:               virsh detach-disk bcrm_test_efi-test sda --config;
    efi-test:               virsh attach-disk bcrm_test_efi-test /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/disks/libvirt/efi/disk2_1-4.qcow2 sda --targetbus scsi --driver qemu --subdriver qcow2 --type disk --config'
    efi-test: ERROR    No matching objects found for --edit model=lsilogic
    efi-test:
    efi-test: error: No disk found whose source path or target is sda
    efi-test:
    efi-test:
    efi-test:
    efi-test: error: Failed to attach disk
    efi-test: error: Requested operation is not valid: Domain already contains a disk with that address
    efi-test:
    efi-test:
    efi-test:
==> efi-test: Starting domain.
==> efi-test: Error when updating domain settings: Call to virDomainUndefineFlags failed: Requested operation is not valid: cannot undefine domain with nvram
==> efi-test: Waiting for domain to get an IP address...
==> efi-test: Waiting for machine to boot. This may take a few minutes...
    efi-test: SSH address: 192.168.121.248:22
    efi-test: SSH username: vagrant
    efi-test: SSH auth method: private key
    efi-test: Warning: Host unreachable. Retrying...
==> efi-test: Machine booted and ready!
==> efi-test: Creating shared folders metadata...
==> efi-test: Exporting NFS shared folders...
==> efi-test: Preparing to edit /etc/exports. Administrator privileges will be required...
==> efi-test: Mounting NFS shared folders...
==> efi-test: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> efi-test: flag to force provisioning. Provisioners marked to run always will still run.

Since 0.6.0:

Bringing machine 'efi' up with 'libvirt' provider...
==> efi: Running action triggers before up ...
==> efi: Running trigger...
==> efi: Adding synced folders.
==> efi: Uploading base box image as volume into Libvirt storage...
==> efi: Creating image (snapshot of base box volume).
==> efi: Creating domain with the following settings...
==> efi:  -- Name:              bcrm_test_efi
==> efi:  -- Description:       Source: /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/Vagrantfile
==> efi:  -- Domain type:       kvm
==> efi:  -- Cpus:              1
==> efi:  -- Feature:           acpi
==> efi:  -- Feature:           apic
==> efi:  -- Feature:           pae
==> efi:  -- Clock offset:      utc
==> efi:  -- Memory:            8192M
==> efi:  -- Management MAC:
==> efi:  -- Loader:            /usr/share/OVMF/OVMF_CODE_4M.fd
==> efi:  -- Nvram:             /var/lib/libvirt/qemu/nvram/bcrm_test_efi_VARS.fd
==> efi:  -- Base box:          debian/efi
==> efi:  -- Storage pool:      efi
==> efi:  -- Image():     /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/disks/libvirt/efi/bcrm_test_efi.img, 80G
==> efi:  -- Disk driver opts:  cache='default'
==> efi:  -- Kernel:
==> efi:  -- Initrd:
==> efi:  -- Graphics Type:     vnc
==> efi:  -- Graphics Port:     -1
==> efi:  -- Graphics IP:       127.0.0.1
==> efi:  -- Graphics Password: Not defined
==> efi:  -- Video Type:        cirrus
==> efi:  -- Video VRAM:        9216
==> efi:  -- Sound Type:
==> efi:  -- Keymap:            en-us
==> efi:  -- TPM Backend:       passthrough
==> efi:  -- TPM Path:
==> efi:  -- Disks:         vdb(qcow2,80), vdc(qcow2,80)
==> efi:  -- Disk(vdb):     /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/disks/libvirt/efi/disk2_1-1.qcow2 (Remove only manually)
==> efi:  -- Disk(vdc):     /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/disks/libvirt/efi/disk2_1-2.qcow2 (Remove only manually)
==> efi:  -- INPUT:             type=mouse, bus=ps2
==> efi: Creating shared folders metadata...
==> efi: Running action triggers before VagrantPlugins::ProviderLibvirt::Action::StartDomain ...
==> efi: Running trigger...
==> efi: Replace SCSI controller model for efi.
    efi: Running local: Inline script
    efi: bash -c 'export LIBVIRT_DEFAULT_URI=qemu:///system; virt-xml bcrm_test_efi --edit model=lsilogic --controller model=virtio-scsi'
    efi: Domain 'bcrm_test_efi' defined successfully.
    efi:
==> efi: Starting domain.
Traceback (most recent call last):
        62: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/batch_action.rb:86:in `block (2 levels) in run'
        61: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/machine.rb:201:in `action'
        60: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/machine.rb:201:in `call'
        59: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/environment.rb:614:in `lock'
        58: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/machine.rb:215:in `block in action'
        57: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/machine.rb:246:in `action_raw'
        56: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/runner.rb:89:in `run'
        55: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/util/busy.rb:19:in `busy'
        54: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/runner.rb:89:in `block in run'
        53: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builder.rb:149:in `call'
        52: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        51: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builtin/trigger.rb:32:in `call'
        50: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        49: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builtin/config_validate.rb:25:in `call'
        48: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        47: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builtin/box_check_outdated.rb:36:in `call'
        46: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        45: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builtin/call.rb:53:in `call'
        44: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/runner.rb:89:in `run'
        43: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/util/busy.rb:19:in `busy'
        42: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/runner.rb:89:in `block in run'
        41: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builder.rb:149:in `call'
        40: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        39: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:127:in `block in finalize_action'
        38: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        37: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/set_name_of_domain.rb:34:in `call'
        36: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        35: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/handle_storage_pool.rb:63:in `call'
        34: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        33: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builtin/handle_box.rb:56:in `call'
        32: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        31: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/handle_box_image.rb:120:in `call'
        30: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        29: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/create_domain_volume.rb:94:in `call'
        28: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        27: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/create_domain.rb:419:in `call'
        26: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        25: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builtin/provision.rb:80:in `call'
        24: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        23: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/prepare_nfs_valid_ids.rb:14:in `call'
        22: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        21: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/plugins/synced_folders/nfs/action_cleanup.rb:25:in `call'
        20: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        19: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builtin/synced_folder_cleanup.rb:28:in `call'
        18: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        17: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builtin/delayed.rb:19:in `call'
        16: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        15: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builtin/synced_folders.rb:87:in `call'
        14: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        13: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/prepare_nfs_settings.rb:21:in `call'
        12: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
        11: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/share_folders.rb:22:in `call'
        10: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
         9: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/create_networks.rb:93:in `call'
         8: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
         7: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/create_network_interfaces.rb:190:in `call'
         6: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
         5: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/set_boot_order.rb:80:in `call'
         4: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
         3: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/builtin/trigger.rb:32:in `call'
         2: from /opt/vagrant/embedded/gems/2.2.19/gems/vagrant-2.2.19/lib/vagrant/action/warden.rb:48:in `call'
         1: from /media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/start_domain.rb:345:in `call'
/media/marcel/94d15c83-a434-4bd0-98fe-1b9605a8ab5a/bcrm_test/.vagrant.d/gems/2.7.4/gems/vagrant-libvirt-0.6.0/lib/vagrant-libvirt/action/start_domain.rb:345:in `undefine': Call to virDomainUndefineFlags failed: Requested operation is not valid: cannot undefine domain with nvram (Libvirt::Error)

@electrofelix
Contributor

I think I know what the problem is. The start domain action was improved to detect when the nvram setting has changed or should be unset, and it is then trying to undefine the domain to undo the changes you've made.

Realistically, we need to push on getting a fix landed in fog-libvirt.
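
Paraphrased, the problematic flow looks something like this (a sketch only, with hypothetical names, not the literal code in start_domain.rb):

# Sketch of the behaviour described above; names are hypothetical.
def update_nvram!(conn, dom, configured_nvram, current_nvram, new_xml)
  return if configured_nvram == current_nvram
  dom.undefine                     # raises Libvirt::Error: "cannot undefine
                                   # domain with nvram" -- the failure seen here
  conn.define_domain_xml(new_xml)  # never reached for nvram domains
end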

@Jeansen

Jeansen commented Nov 23, 2021

@electrofelix Yes, I agree. That is also the conclusion I came to. But it's nothing I could handle with a hook, as I did in the destroy stage, where I remove the nvram tag before a machine is destroyed. So, for the time being, I'll stick to an older version. Everything below 0.6.0 works fine for me. But a fix would be nice, of course.

@trinitronx

trinitronx commented Jan 18, 2022

@Jeansen yes, @electrofelix is correct: this requires fixes in 3 gems along Vagrant's dependency chain:

vagrant-libvirt -> fog-libvirt -> ruby-libvirt

I was able to build & install pre-release gems for all of these locally to test. It's working as long as all 3 are installed in Vagrant's internal GEM_HOME (which for me is: ~/.vagrant.d/gems/2.7.4).

I needed changes in all three gems to get this working.

Then, defining the nvram file + OVMF loader in the Vagrantfile:

Vagrant.configure("2") do |config|

  config.vm.provider :libvirt do |libvirt|
   # [...SNIP...]
    libvirt.loader = '/path/to/OVMF_CODE.fd'
    libvirt.nvram = '/path/to/OVMF_VARS-1024x768.fd'
   # [...SNIP...]
   end
end

Now, vagrant destroy works again for this VM / libvirt "domain".

Here are the pre-release gems that worked for me:

prerelease-gems-for-libvirt-nvram-support.zip

So, if you can't wait for the official gem releases... these contain the above patches.
To install them, I had to override GEM_HOME, GEM_PATH, and PATH to point at Vagrant's internal gem & ruby dirs:

export GEM_HOME=${HOME}/.vagrant.d/gems/2.7.4/
export GEM_PATH=${HOME}/.vagrant.d/gems/2.7.4/
export PATH=/opt/vagrant/embedded/bin/:$PATH

# NOTE:  Only the vagrant-libvirt gem was configured to add a `.pre`-release suffix to its version!
#  Be aware this can cause issues when it comes time to upgrade gems again.
#  Also, this isn't technically supported by the developers of vagrant or vagrant-libvirt.
#  I'd advise avoiding this if you can wait for the new official gems to be released, or if you don't know what you're doing when installing ruby gems.
#  Use at your own risk!
unzip prerelease-gems-for-libvirt-nvram-support.zip
cd prerelease-gems-for-libvirt-nvram-support/
gem install ruby-libvirt-0.8.0.gem
gem install fog-libvirt-0.9.0.gem
gem install vagrant-libvirt-0.7.1.pre.27.gem

@Jeansen

Jeansen commented Jan 18, 2022

@trinitronx Oh great, thanks for the update and your efforts. So far I could help myself by using an older version of vagrant-libvirt (currently 0.4.x).

@yviel-de

yviel-de commented May 2, 2022

I'd like to mention that the fog-libvirt PR mentioned here has been merged.

Looking forward to getting nvram support on aarch64!

@Jeansen

Jeansen commented May 7, 2022

So #1329 should then be available in the next release. Looking forward to it.

@electrofelix
Contributor

#1329 will be available from the next release, but it'll depend on releases from the two upstream projects before the support is enabled. It just won't need another release of this project in addition, as it'll automatically pick up that the newer API is available.

electrofelix added a commit to electrofelix/vagrant-libvirt that referenced this issue May 19, 2022
Calling undefine on a domain and recreating it can result in some edge
case errors where if the current capabilities of libvirt have been
reduced, it may not be possible to restore the old definition.

Instead switch to calling `domain_define` with the new definition and
check that the resulting libvirt domain definition has been updated in
the expected manner, otherwise report an error to the user.

Fixes: vagrant-libvirt#949
Relates-to: vagrant-libvirt#1329
Relates-to: vagrant-libvirt#1027
Relates-to: vagrant-libvirt#1371
electrofelix added a commit that referenced this issue Jun 3, 2022
mmguero pushed a commit to mmguero-dev/vagrant-libvirt that referenced this issue Jun 7, 2022
@Jeansen

Jeansen commented Aug 1, 2022

All errors gone. Works like a charm! Thank you 😄

@electrofelix
Contributor

Forgot to close this when it was confirmed fixed, thanks @Jeansen

@LKHN

LKHN commented Nov 30, 2022

Hi,

I'm still having this issue with the AlmaLinux OS 8.7 UEFI vagrant box on an AlmaLinux OS 8.7 host.

I have to destroy the VM with virsh undefine almalinux-8-uefi_default --nvram.

QEMU version:

qemu-kvm-6.2.0-20.module_el8.7.0+3346+68867adb.2.rpm

Libvirt version:

libvirt-8.0.0-10.module_el8.7.0+3346+68867adb.rpm

Vagrant version:

$ vagrant --version
Vagrant 2.3.3

vagrant-libvirt version:

$ vagrant plugin list
vagrant-libvirt (0.10.8, global)

Box: https://app.vagrantup.com/lkhn/boxes/almalinux-8.uefi

Error output:

/home/$USER/.vagrant.d/gems/2.7.6/gems/fog-libvirt-0.9.0/lib/fog/libvirt/requests/compute/vm_action.rb:7:in `undefine': Call to virDomainUndefineFlags failed: Requested operation is not valid: cannot undefine domain with nvram (Libvirt::Error)

Gem versions:

$ pwd
/home/$USER/.vagrant.d/gems/2.7.6/gems

$ ls -1
diffy-3.4.2
fog-core-2.3.0
fog-json-1.2.0
fog-libvirt-0.9.0
fog-xml-0.1.4
formatador-1.1.0
nokogiri-1.13.9-x86_64-linux
ruby-libvirt-0.8.0
vagrant-libvirt-0.10.8
xml-simple-1.1.9

VM Definition:

<domain type='kvm' id='1'>
  <name>almalinux-8-uefi_default</name>
  <uuid>06a019f5-3d65-49f9-8324-f520582bdeab</uuid>
  <description>Source: /home/$USER/almalinux-8-uefi/Vagrantfile</description>
  <memory unit='KiB'>2097152</memory>
  <currentMemory unit='KiB'>2097152</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-rhel8.6.0'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
    <nvram>OVMF_VARS.secboot_almalinux-uefi.fd</nvram>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <cpu mode='custom' match='exact' check='full'>
    <model fallback='forbid'>Broadwell-IBRS</model>
    <vendor>Intel</vendor>
    <feature policy='require' name='vme'/>
    <feature policy='require' name='ss'/>
    <feature policy='require' name='vmx'/>
    <feature policy='require' name='pdcm'/>
    <feature policy='require' name='f16c'/>
    <feature policy='require' name='rdrand'/>
    <feature policy='require' name='hypervisor'/>
    <feature policy='require' name='arat'/>
    <feature policy='require' name='tsc_adjust'/>
    <feature policy='require' name='umip'/>
    <feature policy='require' name='md-clear'/>
    <feature policy='require' name='stibp'/>
    <feature policy='require' name='arch-capabilities'/>
    <feature policy='require' name='ssbd'/>
    <feature policy='require' name='xsaveopt'/>
    <feature policy='require' name='pdpe1gb'/>
    <feature policy='require' name='abm'/>
    <feature policy='require' name='ibpb'/>
    <feature policy='require' name='ibrs'/>
    <feature policy='require' name='amd-stibp'/>
    <feature policy='require' name='amd-ssbd'/>
    <feature policy='require' name='skip-l1dfl-vmentry'/>
    <feature policy='require' name='pschange-mc-no'/>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/qemu-kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/var/lib/libvirt/images/almalinux-8-uefi_default.img' index='1'/>
      <backingStore type='file' index='2'>
        <format type='qcow2'/>
        <source file='/var/lib/libvirt/images/almalinux-8-uefi_vagrant_box_image_0_1669834560_box.img'/>
        <backingStore/>
      </backingStore>
      <target dev='vda' bus='virtio'/>
      <alias name='ua-box-volume-0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='qemu-xhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x10'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x11'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:7e:6e:b9'/>
      <source network='vagrant-libvirt' portid='5b07738e-25e6-4c93-9aa8-714f8b0c755b' bridge='virbr1'/>
      <target dev='vnet0'/>
      <model type='virtio'/>
      <alias name='ua-net-0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/3'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/3'>
      <source path='/dev/pts/3'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <graphics type='vnc' port='5900' autoport='yes' listen='127.0.0.1' keymap='en-us'>
      <listen type='address' address='127.0.0.1'/>
    </graphics>
    <audio id='1' type='none'/>
    <video>
      <model type='cirrus' vram='16384' heads='1' primary='yes'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='dynamic' model='selinux' relabel='yes'>
    <label>system_u:system_r:svirt_t:s0:c74,c129</label>
    <imagelabel>system_u:object_r:svirt_image_t:s0:c74,c129</imagelabel>
  </seclabel>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+107:+107</label>
    <imagelabel>+107:+107</imagelabel>
  </seclabel>
</domain>

How to reproduce:

cp /usr/share/OVMF/OVMF_VARS.secboot.fd OVMF_VARS.secboot_almalinux-uefi.fd

Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |config|
    config.vm.box = "almalinux-8-uefi"
    config.vm.hostname = "almalinux8-uefi.test"

    config.vm.provider "libvirt" do |libvirt|
        libvirt.qemu_use_session = false
        libvirt.memory = 2048
        libvirt.loader = "/usr/share/OVMF/OVMF_CODE.secboot.fd"
        libvirt.nvram = "OVMF_VARS.secboot_almalinux-uefi.fd"
        libvirt.machine_type = "q35"
    end
end

vagrant up

Make sure VM is working fine:

vagrant ssh

Destroy the VM:

vagrant destroy

@electrofelix
Contributor

@LKHN fog/fog-libvirt@v0.9.0...master shows that the upstream change has been merged but not yet released. I'll poke the devs to see if they would release what is there. The code in 0.10.8 will pass the required flags if the API accepts them, so it will just start working once there is a release of fog-libvirt.
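
An illustrative sketch of that fallback behaviour (not the actual plugin code): the flag is only passed when the installed ruby-libvirt accepts a flags argument, so upgrading fog-libvirt/ruby-libvirt enables it without another release of this plugin. Here `dom` is assumed to be a Libvirt::Domain from ruby-libvirt.

VIR_DOMAIN_UNDEFINE_NVRAM = 0x4  # from libvirt's C headers

def undefine_with_nvram(dom)
  dom.undefine(VIR_DOMAIN_UNDEFINE_NVRAM)
rescue ArgumentError
  # older ruby-libvirt: #undefine takes no flags argument, so domains
  # with nvram attached will still fail to undefine here
  dom.undefine
end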

@LKHN

LKHN commented Nov 30, 2022

@electrofelix, thank you very much for such a quick reply, and for the fantastic job you're doing here!

I am planning to release the official AlmaLinux OS 8 and 9 UEFI + Secure Boot and then AArch64 vagrant boxes once destroying works.

e-kov pushed a commit to elastio/elastio-snap that referenced this issue May 10, 2023
It became possible after the fix in our devboxes where ACPI + NVRAM
is used (https://github.com/elastio/devboxes/pull/267), after the `vagrant-libvirt` fix discussed in
vagrant-libvirt/vagrant-libvirt#1371 (comment), and after upgrading our servers to the version with that fix.