Lots of autostart VMs with PCI passthrough fail to start #1262

Closed
qubesuser opened this Issue Oct 2, 2015 · 1 comment

@qubesuser

Qubes has problems starting VMs with PCI passthrough because Xen apparently requires some special kind of RAM for them (physically contiguous? in the low 32-bit space?) even when an IOMMU is present and active, while Qubes by default assigns all RAM either to VMs or to dom0.

The best workaround for that is to simply start them immediately after boot with autostart.

However, it seems that if one has 5+ VMs with PCI passthrough (which is easy if you assign each network card and each USB controller to a separate VM), just starting the first few VMs is enough to make the next ones fail to start, if memory balancing is enabled. If any of those VMs has a NetVM, the additional VMs that start as a result can make the situation even worse.

The best fix, short of fixing Xen to properly take advantage of IOMMUs, seems to be to change Qubes to boot with memory balancing disabled and with as little memory assigned to dom0 as possible, start all PCI passthrough VMs first, and only then enable memory balancing and start the other autostart VMs. Ideally, NetVMs should be attached to the PCI passthrough VMs after all of them have started, rather than being started beforehand.
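
A minimal sketch of that ordering, assuming hypothetical VM names and a `qubes-qmemman` systemd unit for the memory balancer (only `qvm-start` is standard Qubes tooling here), could look like this:

```python
#!/usr/bin/env python3
# Sketch of the proposed boot ordering, not actual Qubes code.
# The VM names and the "qubes-qmemman" unit name are assumptions;
# qvm-start is the standard dom0 tool for starting a VM.
import subprocess

PCI_VMS = ["sys-net", "sys-usb"]        # VMs with PCI devices assigned (hypothetical)
OTHER_VMS = ["sys-firewall", "work"]    # remaining autostart VMs (hypothetical)

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# 1. Keep memory balancing off while the passthrough VMs allocate their memory.
run("systemctl", "stop", "qubes-qmemman")   # unit name is an assumption

# 2. Start every PCI passthrough VM first, before anything else competes for RAM.
for vm in PCI_VMS:
    run("qvm-start", vm)

# 3. Re-enable balancing only once all passthrough VMs are running.
run("systemctl", "start", "qubes-qmemman")

# 4. Start the remaining autostart VMs under normal memory balancing.
for vm in OTHER_VMS:
    run("qvm-start", vm)
```

Attaching NetVMs to the passthrough VMs would then happen as a final step, after everything above has completed.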

A warning should also be added to Qubes Manager telling people that PCI passthrough VMs may fail to start when started manually.

Member

marmarek commented Jul 16, 2016

If a VM has any PCI device assigned, memory balancing for that VM is automatically disabled.

Anyway, it is very unlikely we'll ever fix this, at least for PV VMs, because of how Xen handles memory assignments and how little control we have over it.
One option would be to switch to HVM domains (where the IOMMU would properly remap memory, so fragmentation/low-memory requirements shouldn't be a problem). This is blocked on #1659, though.
BTW, for PV domains, the IOMMU can only be used to filter memory accesses, not to remap them (mostly because of how PV works).
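
For context, a quick way to see which VMs this rule applies to (i.e. which have PCI devices assigned, and are therefore excluded from balancing) might look like the sketch below; it assumes the Qubes R3.x dom0 Python API (QubesVmCollection and the pcidevs property), which may not match other releases:

```python
#!/usr/bin/env python3
# Sketch: list VMs that have PCI devices assigned (and are therefore
# excluded from memory balancing). Assumes the Qubes R3.x dom0 Python
# API; class and property names may differ in other releases.
from qubes.qubes import QubesVmCollection

collection = QubesVmCollection()
collection.lock_db_for_reading()
collection.load()
collection.unlock_db()

for vm in collection.values():
    if getattr(vm, "pcidevs", None):
        print(vm.name, vm.pcidevs)
```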

@marmarek marmarek closed this Jul 16, 2016
