Switch to PVH #2185
Comments
rootkovska added the enhancement, C: builder, C: core, C: gui-virtualization, C: installer, C: xen, and P: critical labels on Jul 20, 2016
rootkovska added this to the Release 4.0 milestone on Jul 20, 2016
marmarek (Member) commented Jul 20, 2016
Possible problems: PCI passthrough (as currently broken on HVM - #1659).
This comment has been minimized.
Show comment
Hide comment
This comment has been minimized.
Jeeppler commented Jul 26, 2016
Do SLAT and PVH improve guest VM performance (boot time/runtime)?
marmarek (Member) commented Jul 26, 2016
Additionally:
- have qubes-hcl-report (also in older releases) check for SLAT (a rough sketch of such a check follows)
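A rough sketch of that check, assuming dom0 exposes the relevant CPU flags (hypothetical; the real qubes-hcl-report may probe differently, and a hypervisor can mask these flags):

```
#!/bin/sh
# Hypothetical SLAT probe in the spirit of the proposed qubes-hcl-report check.
# Intel reports SLAT as the "ept" CPU flag, AMD as "npt" (a.k.a. RVI).
if grep -qwE 'ept|npt' /proc/cpuinfo; then
    echo "SLAT: supported"
else
    echo "SLAT: not detected (absent, or masked by the hypervisor)"
fi
```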
added two commits to marmarek/qubes-core-libvirt that referenced this issue on Jul 29, 2016
v6ak commented Aug 3, 2016
On a non-VT-d computer, does PVH support PCI passthrough?
(If not, I think it would be safe to use PV fallback, as VMs with PCI passthrough are already somehow privileged on non-VT-d computers.)
marmarek (Member) commented Aug 3, 2016
On Wed, Aug 03, 2016 at 11:10:50AM -0700, Vít Šesták wrote:
> On a non-VT-d computer, does PVH support PCI passthrough?
> (If not, I think it would be safe to use PV fallback, as VMs with PCI passthrough are already somehow privileged on non-VT-d computers.)

The idea is not to support such hardware in Qubes 4.x, and for older hardware to recommend (besides upgrading the hardware...) Qubes 3.x.

Currently, because of the large spectrum of supported hardware, the statement "I use Qubes OS" may mean very different security levels. This is especially bad for less technical users, who can't reason about the implications of missing a particular feature.

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
marmarek (Member) commented Aug 3, 2016
A major problem: it looks like populate-on-demand doesn't work - the domain crashes when started with memory < maxmem. This is a blocker for dynamic memory management (qmemman). Lowering the memory assignment later does work.
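A minimal reproduction sketch, assuming a plain xl toolstack and illustrative names/paths (not an actual Qubes config, and PVH config syntax varies across Xen versions):

```
# Hypothetical guest config: memory < maxmem makes Xen populate the
# remaining pages on demand (PoD), which is the reportedly crashing case.
cat > /tmp/pvh-test.cfg <<'EOF'
name   = "pvh-test"
kernel = "/boot/vmlinuz-guest"   # illustrative guest kernel path
memory = 400                     # startup allocation (MiB)
maxmem = 4000                    # ballooning ceiling; memory < maxmem => PoD
type   = "pvh"                   # PVH guest; older Xen used different syntax
EOF
xl create /tmp/pvh-test.cfg      # reportedly crashes the domain at start
```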
marmarek (Member) commented Aug 3, 2016
> Do SLAT and PVH improve guest VM performance (boot time/runtime)?

Theoretically yes, but I haven't tested it yet.
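A crude way to compare, assuming an expendable test VM (qvm-start returning does not cover the full in-guest boot, so treat the numbers as rough):

```
# Unscientific boot-time comparison; "testvm" is an illustrative VM name.
time qvm-start testvm   # run once per virtualization mode, then compare
```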
v6ak commented Aug 3, 2016
Thanks for the info. I understand the reason, especially given the limited development resources. QSB-24 is, however, confusing: it only mentions that SLAT/EPT will be needed, not VT-d, which might suggest that 99% of Qubes users have a CPU compatible with the future Qubes 4.0. Counting VT-d, it does not look that way, as machines without VT-d seem to be more common.
Not supporting non-VT-d hardware in 4.0 is controversial, but it seems somewhat justified, and it does not sound like a practical issue for me if it happens in 4.0. My main point is communication: please make this future incompatibility more visible. Without thinking about the implementation of PVH, I would have had no idea that such an issue might arise. All Qubes users should have a real chance to learn about the adjusted hardware requirements soon enough. There is arguably some time before the last release that does not require VT-d (3.2?) becomes unsupported (maybe a year), but someone buying a new laptop with Qubes compatibility in mind should know (or have a real chance to know) this today.
marmarek (Member) commented Aug 3, 2016
You're right. We'll update the system requirements page soon, but we need to work out some details first.
andrewdavidwong (Member) commented Aug 4, 2016
> The idea is not to support such hardware in Qubes 4.x, and for older hardware to recommend (besides upgrading the hardware...) Qubes 3.x.

Does this mean changing our support period for prior versions? Currently it's "six months after each subsequent major or minor release." If so, this is another thing we should make sure to communicate clearly.
rootkovska (Member) commented Aug 4, 2016
On Wed, Aug 03, 2016 at 01:55:21PM -0700, Marek Marczykowski-Górecki wrote:
> A major problem: it looks like populate-on-demand doesn't work - the domain crashes when started with memory < maxmem. This is a blocker for dynamic memory management (qmemman). Lowering the memory assignment later does work.

Shall we create a new ticket to track this? The obvious problem with "VM must start with maxmem" is that it might be hard to launch more than a few VMs.

j.
v6ak commented Aug 4, 2016
That's great. I am in favour of communicating this via the blog/mailing list/Twitter/etc.
marmarek (Member) commented Aug 16, 2016
Based on the current state of PVH in Xen, for Qubes 4.0 we'll go with standard HVM, then switch to PVHv2 later when it's ready.
Jeeppler commented Aug 16, 2016
@marmarek do you really want to switch to pure HVM? I think for Linux it would make sense to use PVHVM: an HVM which uses hardware virtualization extensions plus PV drivers for networking and disk I/O (source: https://wiki.xen.org/wiki/Xen_Project_Software_Overview).
I mean that the HVMlite implementation would add a way to boot directly into the kernel (source: http://fossies.org/linux/xen/docs/misc/hvmlite.markdown), which PVHVM does not offer.
marmarek (Member) commented Aug 16, 2016

Yes, of course I meant HVM with PV drivers.
v6ak commented Aug 16, 2016
So, will stubdomains be needed? IIUC, PVHVM does not need them, while HVM+PV does.
And when there is a full HVM (e.g. a Windows VM) that needs a stubdomain, what domain type will the stubdomain use? If it runs as PV (like today's stubdomains do), then the PV security issue is only partially solved, especially when considering QEMU insecure.
marmarek (Member) commented Aug 16, 2016

See the linked discussion on xen-devel - in short, PVH isn't usable yet.
marmarek (Member) commented Aug 16, 2016
> If it runs as PV (like today's stubdomains do), then the PV security issue is only partially solved, especially when considering QEMU insecure.

Yes, that's unfortunately right.
Jeeppler commented Aug 17, 2016
HVMlite would be PVHv2 according to the Xen wiki, which is different from PVHVM. PVHVM needs emulation to boot, whereas PVHv2 can boot the kernel paravirtualized without any emulation.
HVM+PV is only for Windows or other closed-source guests. HVM+PV guests need emulated timers, interrupts, and spinlocks (source: http://www.slideshare.net/xen_com_mgr/6-stefano-spvhvm, slide 12).
So, because CVE-2016-6258 is a "fatal" (http://betanews.com/2016/07/28/major-security-vulnerability-xen-hypervisor/) bug in PV guests, you would like to first switch to PVHVM guests and, as soon as PVH (aka HVMlite or PVHv2) is usable, switch to it?
Do you want to wait to release Qubes 4.0 until PVH is usable, or do you want to release Qubes 4.0 in "Winter" 2016 with PVHVM guests?
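For reference, a sketch of how these modes are selected in an xl guest config (option names vary across Xen releases; illustrative, not authoritative):

```
# Illustrative xl config fragments for the modes discussed above.
builder = "generic"   # classic PV guest (the Qubes 3.x default)
builder = "hvm"       # full HVM: qemu emulates the platform; PV drivers
                      # inside the guest give "PVHVM"
type    = "pvh"       # PVHv2 / HVMlite (newer Xen): hardware-virtualized,
                      # no qemu, direct kernel boot
```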
marmarek (Member) commented Aug 17, 2016
> So, because CVE-2016-6258 is a "fatal" (http://betanews.com/2016/07/28/major-security-vulnerability-xen-hypervisor/) bug in PV guests, you would like to first switch to PVHVM guests and, as soon as PVH (aka HVMlite or PVHv2) is usable, switch to it?

Yes (fixed the typo in my previous comment). This would still expose the PV interface to qemu (as the stubdomain is PV).

> Do you want to wait to release Qubes 4.0 until PVH is usable, or do you want to release Qubes 4.0 in "Winter" 2016 with PVHVM guests?

It depends on how work on PVHv2 goes. If it goes into Xen 4.8 (feature freeze in September, release in December), we may consider delaying Qubes 4.0. Otherwise we'll go with the band-aid of using PV only for the stubdomain.
It would be interesting to check whether qemu (with the whole stubdomain) in PVHVM could be killed just after booting Linux. If so, that would be much better.
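A sketch of that experiment, assuming xl tooling and the conventional <guest-name>-dm naming for stubdomains (untested; names are illustrative):

```
# Hypothetical: destroy the PV stubdomain (and its qemu) once the PVHVM
# guest has booted and no longer needs emulated devices.
GUEST=testvm
STUBDOM_ID=$(xl list | awk -v n="${GUEST}-dm" '$1 == n {print $2}')
[ -n "$STUBDOM_ID" ] && xl destroy "$STUBDOM_ID"
```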
v6ak commented Aug 17, 2016
Aha, I did not read it in depth, so my original assumption was that just PVH is unusable. If PVHVM is also unusable, then I probably got it:
- All domains (except dom0 and stubdomains) are HVMs. Some are traditional HVMs (e.g. today's Windows VMs); others (i.e., those that were PVs in the past) just boot differently. (Maybe some R/O media with the kernel.)
- All domains (except dom0 and stubdomains) have a corresponding stubdomain, which is PV.
- Implication #0: higher hardware requirements, discussed above.
- Implication #1: some memory overhead, probably under 50 MiB per VM.
- Implication #2: PV-related vulnerabilities can be exploited only together with QEMU/stubdomain vulnerabilities.
- Implication #3: QEMU bugs will probably be taken more seriously than they used to be.
Not optimal, but you seem to be doing the best you can with the current state of Xen and the corresponding ecosystem.
marmarek added the release-notes label on Oct 13, 2016
added a commit to marmarek/qubes-core-admin that referenced this issue on Jan 18, 2017
marmarek (Member) commented Feb 6, 2017
PVHv2 status update (after talking in person to Xen people at FOSDEM): there is still slight disagreement on details of Linux support for PVHv2 (AFAIR about CPU hotplug). It should be resolved and implemented this year, but will probably take more than 1-2 months. This all concerns PVHv2 without PCI passthrough, which is another story. This means there won't be PVHv2 Linux VMs in Qubes 4.0.
added a commit to marmarek/qubes-core-admin that referenced this issue on Feb 14, 2017
Jeeppler commented Feb 22, 2017
Phoronix reported that Xen developers submitted patches for PVHv2 (formerly known as HVMLite) to the 4.11 kernel: Xen Changes For Linux 4.11: Lands PVHv2 Guest Support.
Here is the Linux kernel archive pull-request message: [GIT PULL] xen: features and fixes for 4.11-rc0.
marmarek (Member) commented Feb 22, 2017

Good to know, thanks!
qubesos-bot referenced this issue in QubesOS/updates-status on Apr 2, 2017: core-libvirt v3.1.0-1 (r4.0) #24 (closed)
added a commit to marmarek/qubes-core-admin that referenced this issue on May 16, 2017
added six commits to marmarek/qubes-core-admin that referenced this issue on Jun 3, 2017
added two commits to marmarek/qubes-core-admin that referenced this issue on Jun 5, 2017
qubesos-bot referenced this issue in QubesOS/updates-status on Jul 4, 2017: core-admin v4.0.1 (r4.0) #100 (closed)
marmarek referenced this issue on Jul 17, 2017: Change VM 'hvm' property into 'virt_mode' property #2912 (closed)
added a commit to marmarek/qubes-mgmt-salt-dom0-virtual-machines that referenced this issue on Oct 8, 2017
qubesos-bot referenced this issue in QubesOS/updates-status on Oct 8, 2017: mgmt-salt-dom0-virtual-machines v4.0.6 (r4.0) #252 (closed)
marmarek (Member) commented Oct 23, 2017
> Adjustments to the Linux GUI agent (get rid of the u2mfn module, report guest MFNs), and the GUI daemon (we decided we still want to use xc_map_foreign_page for the time being)

Actually, it turned out this step isn't needed. But it is still nice to have.
marmarek closed this in QubesOS/qubes-core-admin@4ff5387 on Jan 15, 2018
qubesos-bot commented Jan 18, 2018
Automated announcement from builder-github

The package qubes-core-dom0-4.0.16-1.fc25 has been pushed to the r4.0 testing repository for dom0. To test this update, please install it with the following command:

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing
qubesos-bot added the r4.0-dom0-cur-test label on Jan 18, 2018
qubesos-bot referenced this issue in QubesOS/updates-status on Jan 18, 2018: core-admin v4.0.16 (r4.0) #365 (closed)
Jeeppler commented Jan 21, 2018
@marmarek what is the final decision? Are you switching entirely to HVM or is PVH still an option?
marmarek (Member) commented Jan 21, 2018
Where possible (no PCI devices, Linux >= 4.11), we're switching to PVH.
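In Qubes 4.0 this maps to the per-VM virt_mode property (see #2912 referenced above); a usage sketch, with "work" as an illustrative VM name:

```
qvm-prefs work virt_mode        # query the current mode: pv, hvm, or pvh
qvm-prefs work virt_mode pvh    # PVH: requires no PCI devices, kernel >= 4.11
```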
qubesos-bot commented Feb 6, 2018
Automated announcement from builder-github

The package qubes-core-dom0-4.0.21-1.fc25 has been pushed to the r4.0 stable repository for dom0. To install this update, please use the standard update command:

sudo qubes-dom0-update

Or update dom0 via Qubes Manager.
rootkovska commented Jul 20, 2016 (edited by marmarek on Oct 23, 2017)
For Qubes 4 we want to move away from using PV as the default method of virtualization in favor of hardware-aided (i.e., SLAT-enforced) virtualization, which Xen currently offers as PVH. The main reason for this is security: we believe SLAT should be less buggy than PV memory virtualization, as e.g. XSA-148 showed a few months ago. Today most platforms should support SLAT, which was not the case six years ago when we originally chose PV over HVM. HVM without SLAT requires clunky shadow page table virtualization, arguably even more complex and error-prone than PV virtualization.
This ticket serves as a central place to track progress of this task, which should include: