Switch to PVH #2185

Closed
rootkovska opened this Issue Jul 20, 2016 · 29 comments

Comments

@rootkovska
Member

rootkovska commented Jul 20, 2016

For Qubes 4 we want to move away from PV as the default virtualization method in favor of hardware-assisted (i.e. SLAT-enforced) virtualization, which Xen currently offers as PVH. The main reason for this is security: we believe SLAT should be less buggy than PV memory virtualization, as e.g. XSA-148 showed a few months ago. Today most platforms should support SLAT, which was not the case 6 years ago when we originally chose PV over HVM. HVM without SLAT requires clunky shadow page table virtualization, arguably even more complex and error-prone than PV virtualization.

This ticket serves as a central place to track progress of this task, which should include:

  • Enabling (PV)GRUB (not PyGRUB!) support in our template builders
  • Adjustments to the Linux GUI agent (get rid of the u2mfn module, report guest MFNs), and the GUI daemon (we decided we still want to use xc_map_foreign_page for the time being)
  • Modify the default Xen VM config generated by core (potentially also: apply a patch to libvirt to support PVH?)
  • Installer/firstboot: detect whether the CPU has SLAT support (which also implies VT-x), and refuse to install otherwise (possibly recommending the Qubes 3.x stable branch instead); see the sketch below this list.
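As a rough illustration of that last check, here is a minimal sketch in Python, assuming the firstboot code can read unfiltered CPU flags from /proc/cpuinfo (under a Xen dom0 the flags may be masked, in which case parsing xl info / xl dmesg output would be needed instead); the flag names and messages are illustrative, not taken from the actual installer:

    #!/usr/bin/python3
    # Hypothetical firstboot check: refuse to install without VT-x/AMD-V + SLAT.
    # Assumes /proc/cpuinfo exposes the raw CPU flags (vmx/svm, ept/npt).

    def cpu_flags():
        with open('/proc/cpuinfo') as cpuinfo:
            for line in cpuinfo:
                if line.startswith('flags'):
                    return set(line.split(':', 1)[1].split())
        return set()

    def has_slat(flags):
        # Intel: VT-x (vmx) + EPT; AMD: AMD-V (svm) + NPT
        return ('vmx' in flags and 'ept' in flags) or \
               ('svm' in flags and 'npt' in flags)

    if __name__ == '__main__':
        if not has_slat(cpu_flags()):
            raise SystemExit('This CPU lacks SLAT (EPT/NPT), which Qubes 4.x '
                             'requires; consider the Qubes 3.x stable branch.')
        print('SLAT support detected, installation can proceed.')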
@marmarek
Member

marmarek commented Jul 20, 2016

Possible problems: PCI passthrough (as currently broken on HVM - #1659)

@Jeeppler

Jeeppler commented Jul 26, 2016

Does SLAT and PVH improve the guest VM performance (boot time/runtime)?

@marmarek
Member

marmarek commented Jul 26, 2016

Additionally:

  • have qubes-hcl-report (also in older releases) check for SLAT
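For reference, a minimal sketch of how such a check could be done from dom0, assuming Xen's boot log mentions "Hardware Assisted Paging" when HAP/SLAT is in use; the exact string matched and the report format are assumptions, not the actual qubes-hcl-report code:

    #!/usr/bin/python3
    # Hypothetical dom0 helper: report whether Xen detected SLAT (HAP).
    # Assumes the Xen boot log contains "Hardware Assisted Paging" when
    # HAP/SLAT is available; xl dmesg needs to be run as root in dom0.
    import subprocess

    def xen_reports_hap():
        try:
            log = subprocess.check_output(['xl', 'dmesg'],
                                          universal_newlines=True)
        except (OSError, subprocess.CalledProcessError):
            return None  # not under Xen, xl missing, or insufficient privileges
        return 'Hardware Assisted Paging' in log

    if __name__ == '__main__':
        hap = xen_reports_hap()
        print('HAP/SLAT: ' + {True: 'yes', False: 'no', None: 'unknown'}[hap])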

marmarek added a commit to marmarek/qubes-core-libvirt that referenced this issue Jul 29, 2016

@marmarek marmarek referenced this issue Jul 31, 2016

Closed

Enhance qubes-hcl-report tool #2214

1 of 3 tasks complete
@v6ak

v6ak commented Aug 3, 2016

On a non-VT-d computer, does PVH support PCI passthrough?

(If not, I think it would be safe to use PV fallback, as VMs with PCI passthrough are already somehow privileged on non-VT-d computers.)

@marmarek
Member

marmarek commented Aug 3, 2016

On Wed, Aug 03, 2016 at 11:10:50AM -0700, Vít Šesták wrote:

On a non-VT-d computer, does PVH support PCI passthrough?

(If not, I think it would be safe to use PV fallback, as VMs with PCI passthrough are already somehow privileged on non-VT-d computers.)

The idea is to not support such hardware in Qubes 4.x, and for older hardware recommend (besides upgrading the hardware...) Qubes 3.x. Currently, because of the large spectrum of supported hardware, the statement "I use Qubes OS" may mean very different security levels. This is especially bad for less technical users, who can't reason about the implications of not having a particular feature.

Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

@marmarek
Member

marmarek commented Aug 3, 2016

Some major problem: it looks like populate-on-demand doesn't work - domain crashes when started with memory < maxmem. This is a blocker for dynamic memory management (qmemman). Later lowering memory assignment does work.
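As a hedged illustration of what the band-aid would look like until populate-on-demand works (this is not a decision from the thread), a VM would have to start fully populated, i.e. with its startup memory pinned to maxmem. A minimal sketch using the Qubes 4.x qubesadmin Python API and its usual memory/maxmem properties; the VM name is hypothetical:

    #!/usr/bin/python3
    # Hypothetical workaround while populate-on-demand is broken: start the VM
    # fully populated by setting its startup memory equal to maxmem (qmemman
    # can still balloon it down later, per the comment above).
    import qubesadmin

    app = qubesadmin.Qubes()
    vm = app.domains['work']      # hypothetical VM name

    vm.memory = vm.maxmem         # memory == maxmem, so no PoD is needed
    print('{}: memory={} maxmem={}'.format(vm.name, vm.memory, vm.maxmem))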

@marmarek
Member

marmarek commented Aug 3, 2016

Does SLAT and PVH improve the guest VM performance (boot time/runtime)?

Theoretically yes, but haven't tested it yet.

@v6ak

v6ak commented Aug 3, 2016

Thanks for the info. I understand the reason, especially given the limited resources for development. QSB-24 is, however, confusing: it only mentions that SLAT/EPT will be needed, not VT-d, which might sound like 99% of Qubes users have a CPU compatible with the future Qubes 4.0. Once VT-d is counted as well, it does not sound like that, as machines without VT-d seem to be more frequent.

Not supporting non-VT-d hardware in 4.0 is controversial, but it seems somewhat justified. Moreover, it does not sound like a practical issue for me if it happens in 4.0. My main point is the communication: please make this future incompatibility more visible. Without thinking about the implementation of PVH, I would not have had any idea that such an issue might arise. All Qubes users should have a real chance to realize the adjusted hardware requirements soon enough. There is arguably some time before the last release not requiring VT-d (3.2?) becomes unsupported (maybe a year), but someone buying a new laptop with Qubes compatibility in mind should know (or have a real chance to know) that fact today.

@marmarek
Member

marmarek commented Aug 3, 2016

You're right. We'll update the system requirements page soon, but we need to work out some details first.

@andrewdavidwong
Member

andrewdavidwong commented Aug 4, 2016

The idea is to not support such hardware in Qubes 4.x, and for older hardware recommend (besides upgrading the hardware...) Qubes 3.x.

Does this mean changing our support period for prior versions? Currently it's "six months after each subsequent major or minor release." If so, this is another thing we should make sure to communicate clearly.

@rootkovska
Member

rootkovska commented Aug 4, 2016

On Wed, Aug 03, 2016 at 01:55:21PM -0700, Marek Marczykowski-Górecki wrote:

Some major problem: it looks like populate-on-demand doesn't work - domain crashes when started with memory < maxmem. This is a blocker for dynamic memory management (qmemman). Later lowering memory assignment does work.

Shall we create a new ticket for tracking this? The obvious problem with "VM must start with maxmem" is that it might be hard to launch more than a few VMs.

j.

@v6ak

v6ak commented Aug 4, 2016

That's great. I am in favour of communicating this via blog/mailing list/Twitter/etc.

@marmarek
Member

marmarek commented Aug 16, 2016

Based on the current state of PVH in Xen, for Qubes 4.0 we'll go with standard HVM, then switch to PVHv2 later when it's ready.

@Jeeppler

Jeeppler commented Aug 16, 2016

@marmarek do you really want to switch to pure HVM? I think for Linux it would make sense to use PVHVM: an HVM which uses hardware virtualization extensions and PV drivers for networking and disk I/O (source: https://wiki.xen.org/wiki/Xen_Project_Software_Overview). I mean the HVMlite implementation would add a way to boot directly into the kernel (source: http://fossies.org/linux/xen/docs/misc/hvmlite.markdown), which PVHVM does not offer.

@marmarek
Member

marmarek commented Aug 16, 2016

Yes, of course I meant HVM with PV drivers.

@v6ak

v6ak commented Aug 16, 2016

So, will stubdomains be needed? IIUC, PVHVM does not need this, while HVM+PV does.

And when there is a full HVM (e.g. a Windows VM) which needs a stubdomain, what domain type will it use? If it will run in a PV (like today's PV do), then the issue with PV security is solved only partially, especially when considering QEMU as insecure.

@marmarek
Member

marmarek commented Aug 16, 2016

See linked discussion on xen-devel - in short PVH isn't usable yet.

@marmarek
Member

marmarek commented Aug 16, 2016

If it will run in a PV (like today's PV do), then the issue with PV security is solved only partially, especially when considering QEMU as insecure.

Yes, that's unfortunately right.

@marmarek marmarek referenced this issue Aug 16, 2016

Closed

DispVM changes in Qubes 4.0 #2253

5 of 5 tasks complete
@Jeeppler

Jeeppler commented Aug 17, 2016

HVMlite would be PVHv2 according to the Xen wiki, which is different from PVHVM. PVHVM needs emulation to boot, whereas PVHv2 can boot the kernel paravirtualized without any emulation.

HVM+PV is only for Windows or other closed-source guests. HVM+PV guests need emulated timers, interrupts and spinlocks. (source: http://www.slideshare.net/xen_com_mgr/6-stefano-spvhvm slide 12)

So, because CVE-2016-6258 is a "fatal" (http://betanews.com/2016/07/28/major-security-vulnerability-xen-hypervisor/) bug in PV guests, you would like to first switch to PVHVM guests and, as soon as PVH aka HVMlite or PVHv2 is usable, switch to it?

Do you want to wait to release Qubes 4.0 until PVH is usable, or do you want to release Qubes 4.0 in "Winter" 2016 with PVHVM guests?

@marmarek
Member

marmarek commented Aug 17, 2016

So, because CVE-2016-6258 is a "fatal" (http://betanews.com/2016/07/28/major-security-vulnerability-xen-hypervisor/) bug in PV guests, you would like to first switch to PVHVM guests and, as soon as PVH aka HVMlite or PVHv2 is usable, switch to it?

Yes (fixed the typo in my previous comment). This would still expose the PV interface to qemu (as the stubdomain is PV).

Do you want to wait to release Qubes 4.0 until PVH is usable, or do you want to release Qubes 4.0 in "Winter" 2016 with PVHVM guests?

It depends on how work on PVHv2 goes. If it goes in for Xen 4.8 (feature freeze in September, release in December), we may consider delaying Qubes 4.0. Otherwise we will go with the band-aid of using PV only for the stubdomain.

It would be interesting to check whether qemu (with the whole stubdomain) in PVHVM could be killed just after booting Linux. If so, that would be much better.

@v6ak

v6ak commented Aug 17, 2016

Aha, I did not read it in depth, so my original assumption was that just PVH is unusable. If PVHVM is also unusable, then I probably got it:

  • All domains (except dom0 and stubdomains) are HVMs. Some are traditional HVMs (e.g. today's Windows VMs), others (i.e., those that were PVs in the past) just have a different boot. (Maybe some R/O media with a kernel.)
  • All domains (except dom0 and stubdomains) have a corresponding stubdomain, which is PV.
  • Implication # 0: higher HW requirements, discussed above
  • Implication # 1: some memory overhead, probably under 50 MiB per VM
  • Implication # 2: PV-related vulnerabilities can be exploited only together with QEMU/stubdomain vulnerabilities.
  • Implication # 3: QEMU bugs will probably be taken more seriously than they used to be.

Not optimal, but you seem to be doing the best you can with the current state of Xen and the corresponding ecosystem.

@marmarek marmarek referenced this issue Jan 14, 2017

Closed

Install grub in template's root.img #2577

0 of 5 tasks complete

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Jan 18, 2017

@marmarek
Member

marmarek commented Feb 6, 2017

PVHv2 status update (after talking in person to Xen people at FOSDEM): there is still a slight disagreement on the details of Linux support for PVHv2 (AFAIR about CPU hotplug). It should be resolved and implemented this year, but will probably take more than 1-2 months. This is all about PVHv2 without PCI passthrough, which is another story. This means there won't be PVHv2 Linux VMs in Qubes 4.0.

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Feb 14, 2017

@Jeeppler

Jeeppler commented Feb 22, 2017

Phoronix reported that Xen developers submitted patches for PVHv2 (formerly known as HVMlite) to the 4.11 kernel: Xen Changes For Linux 4.11: Lands PVHv2 Guest Support

Here is the Linux kernel archive pull-request message: [GIT PULL] xen: features and fixes for 4.11-rc0

@marmarek
Member

marmarek commented Feb 22, 2017

Good to know, thanks!

@qubesos-bot qubesos-bot referenced this issue in QubesOS/updates-status Apr 2, 2017

Closed

core-libvirt v3.1.0-1 (r4.0) #24

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue May 16, 2017

Enable linux-stubdom by default
Also, make it possible to set default on a template for its VMs.

QubesOS/qubes-issues#2185

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Jun 3, 2017

rpm: depend on linux-stubdom package
Install both stubdom implementations: mini-os one (xen-hvm) and linux
one (xen-hvm-stubdom-linux).

QubesOS/qubes-issues#2185

marmarek added a commit to marmarek/qubes-core-admin that referenced this issue Jun 5, 2017

@qubesos-bot qubesos-bot referenced this issue in QubesOS/updates-status Jul 4, 2017

Closed

core-admin v4.0.1 (r4.0) #100

marmarek added a commit to marmarek/qubes-mgmt-salt-dom0-virtual-machines that referenced this issue Oct 8, 2017

Switch sys-net and sys-usb back to HVM
Since MSI support is fixed/implemented, HVM is useable for hardware
handling domains.

QubesOS/qubes-issues#2849
QubesOS/qubes-issues#2185

@qubesos-bot qubesos-bot referenced this issue in QubesOS/updates-status Oct 8, 2017

Closed

mgmt-salt-dom0-virtual-machines v4.0.6 (r4.0) #252

@marmarek
Member

marmarek commented Oct 23, 2017

Adjustments to the Linux GUI agent (get rid of the u2mfn module, report guest MFNs), and the GUI daemon (we decided we still want to use xc_map_foreign_page for the time being)

Actually, it turned out this step isn't needed. But it is still nice to have.

@qubesos-bot

qubesos-bot commented Jan 18, 2018

Automated announcement from builder-github

The package qubes-core-dom0-4.0.16-1.fc25 has been pushed to the r4.0 testing repository for dom0.
To test this update, please install it with the following command:

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing

Changes included in this update

@qubesos-bot qubesos-bot referenced this issue in QubesOS/updates-status Jan 18, 2018

Closed

core-admin v4.0.16 (r4.0) #365

@Jeeppler

Jeeppler commented Jan 21, 2018

@marmarek what is the final decision? Are you switching entirely to HVM or is PVH still an option?

@marmarek
Member

marmarek commented Jan 21, 2018

Where possible (no PCI devices, Linux >= 4.11), we're switching to PVH.
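To illustrate what that policy amounts to for a single qube, here is a minimal sketch using the Qubes 4.0 qubesadmin Python API and its virt_mode property; the VM name is hypothetical, and the "no PCI devices, kernel >= 4.11" conditions are left as comments rather than checked in code:

    #!/usr/bin/python3
    # Illustrative only: switch one qube to PVH. Per the comment above, this
    # applies where the qube has no PCI devices attached and runs a kernel
    # >= 4.11; qubes with PCI devices stay on HVM.
    import qubesadmin

    app = qubesadmin.Qubes()
    vm = app.domains['work']      # hypothetical VM name

    vm.virt_mode = 'pvh'          # would be 'hvm' for a qube with PCI devices
    print('{} now uses virt_mode={}'.format(vm.name, vm.virt_mode))

The same setting is exposed on the command line as qvm-prefs <vmname> virt_mode pvh.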

@qubesos-bot

qubesos-bot commented Feb 6, 2018

Automated announcement from builder-github

The package qubes-core-dom0-4.0.21-1.fc25 has been pushed to the r4.0 stable repository for dom0.
To install this update, please use the standard update command:

sudo qubes-dom0-update

Or update dom0 via Qubes Manager.

Changes included in this update