             ---===[ Qubes Security Bulletin #29 ]===---

                            April 4, 2017


     Critical Xen bug in PV memory virtualization code (XSA-212)


Quick Summary
==============

The Xen Security Team has disclosed a serious security bug (XSA-212)
in the hypervisor code handling memory virtualization for
paravirtualized (PV) VMs [1]:

| The XSA-29 fix introduced an insufficient check on XENMEM_exchange
| input, allowing the caller to drive hypervisor memory accesses outside
| of the guest provided input/output arrays.
|
| A malicious or buggy 64-bit PV guest may be able to access all of
| system memory, allowing for all of privilege escalation, host crashes,
| and information leaks.

An attacker who exploits this bug can break Qubes-provided isolation.
This means that if an attacker has already exploited another
vulnerability, e.g. in a Web browser or in the networking or USB
stack, then the attacker would be able to compromise a whole Qubes
system.

Description of the bug
=======================

The arguments of the XENMEM_exchange hypercall contain an array of
page numbers to be exchanged and an array for the exchanged page
numbers. Validation of those arrays (supposedly living in guest
memory) assumes a certain memory layout -- that Xen memory is just
above non-canonical addresses. It also assumes that array accesses are
sequential, starting from the beginning of the array. With this
assumption, the validation code checks only the address of the first
array element and assumes (thanks to sequential access) that further
code will hit a non-canonical address, resulting in a page fault,
before reaching Xen memory.

Relevant part of xen/include/asm-x86/x86_64/uaccess.h:

/*
 * Valid if in +ve half of 48-bit address space, or above Xen-reserved area.
 * This is also valid for range checks (addr, addr+size). As long as the
 * start address is outside the Xen-reserved area, sequential accesses
 * (starting at addr) will hit a non-canonical address (and thus fault)
 * before ever reaching VIRT_START.
 */
#define __addr_ok(addr) \
    (((unsigned long)(addr) < (1UL<<47)) || \
     ((unsigned long)(addr) >= HYPERVISOR_VIRT_END))

#define access_ok(addr, size) \
    (__addr_ok(addr) || is_compat_arg_xlat_range(addr, size))

#define array_access_ok(addr, count, size) \
    (access_ok(addr, (count)*(size)))

Unfortunately, this assumption is false in XENMEM_exchange. The caller
of this hypercall has full control over the place at which array
processing starts (using the exch.nr_exchanged argument). This means
that the caller can trick the Xen hypervisor into reading input page
numbers from the Xen memory area or writing exchanged page numbers
into the Xen memory area. The attacker does not directly control the
values written there but, by using, for example, the balloon driver,
can influence which pages will be available in the system and later
chosen for allocation by XENMEM_exchange.
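
To illustrate why the start-address check breaks down, here is a
minimal, self-contained sketch. It is illustrative only and not Xen
source: the HYPERVISOR_VIRT_START/END constants follow Xen's x86-64
memory layout, and the index variable plays the role of a
caller-chosen exch.nr_exchanged. A base address that passes the quoted
check can still yield an element address inside the Xen-reserved area
once the caller picks the starting index:

#include <stdint.h>
#include <stdio.h>

/* Xen-reserved virtual address range on x86-64 (illustrative constants). */
#define HYPERVISOR_VIRT_START 0xffff800000000000UL
#define HYPERVISOR_VIRT_END   0xffff880000000000UL

/* Simplified version of the __addr_ok()/array_access_ok() logic quoted
 * above: only the start address of the array is validated. */
static int addr_ok(unsigned long addr)
{
    return (addr < (1UL << 47)) || (addr >= HYPERVISOR_VIRT_END);
}

static int array_access_ok(unsigned long addr, unsigned long count,
                           unsigned long size)
{
    /* In the real macro, count*size only feeds the compat-area check
     * (omitted here); __addr_ok() still sees only the start address. */
    (void)count; (void)size;
    return addr_ok(addr);
}

int main(void)
{
    unsigned long base = 0x100000UL;   /* ordinary guest address: passes the check */
    unsigned long target = HYPERVISOR_VIRT_START + 0x1000; /* inside Xen memory */
    /* Hypothetical resume index, analogous to exch.nr_exchanged. */
    unsigned long idx = (target - base) / sizeof(uint64_t);

    printf("array_access_ok(base) = %d\n",
           array_access_ok(base, 1, sizeof(uint64_t)));
    printf("but element %lu would be accessed at 0x%lx,\n",
           idx, base + idx * sizeof(uint64_t));
    printf("inside the Xen-reserved area [0x%lx, 0x%lx)\n",
           HYPERVISOR_VIRT_START, HYPERVISOR_VIRT_END);
    return 0;
}

Sequential access starting from base would indeed fault on a
non-canonical address long before reaching HYPERVISOR_VIRT_START,
which is what the macro's comment relies on; a caller-chosen starting
index simply skips over that gap.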

This means that a sufficiently determined attacker can use this bug to
write arbitrary data into arbitrary places in Xen memory and thereby
take full control of the system.

Discussion
===========

This is another bug resulting from the overly-complex memory
virtualization required for PV in Xen. As we announced last year [5],
the upcoming Qubes OS 4.0 will no longer use PV. Instead, we will be
switching to HVM-based virtualization:

| One of the most important security improvements that we plan to
| introduce with the release of Qubes 4 is to ditch paravirtualization
| (PV) technology and replace it with hardware-enforced memory
| virtualization, which recent processors have made possible thanks to
| so-called Second Level Address Translation (SLAT), also known as EPT
| in Intel parlance. SLAT (EPT) is an extension to Intel VT-x
| virtualization, which originally was capable of only CPU
| virtualization but not memory virtualization and hence required a
| complex Shadow Page Tables approach (which we believed back then was
| actually less attractive than the PV approach). We hope that embracing
| SLAT-based memory virtualization will allow us to prevent disastrous
| security bugs, such as the infamous XSA 148, publicly disclosed in
| October of last year, which unlike many other major Xen bugs
| regrettably did affect Qubes OS. Consequently, we will be requiring
| SLAT support of all certified hardware for Qubes OS 4 and later.

At the same time, we would like to point out that the security of
Qubes OS has so far been affected by less than 10% of publicly
disclosed Xen bugs, as tracked by the recently created Xen Security
Advisory (XSA) Tracker. [6]

Availability of patches
========================

Patched packages will be built and uploaded to the security-testing
repository shortly after this advisory is published. We have recently
implemented and published the details of a new, transparent build
infrastructure. [3] In this new infrastructure, the source code for
packages is pushed to a public repository, and logs from the build
process are also made public. However, the Xen security policy does
not permit us to make this data public until after the XSA-212 embargo
has been lifted. [4] While we have already privately built and tested
these packages, we must wait until the embargo has been lifted before
transparently building the public packages using our new
infrastructure.

Thanks to the new infrastructure, it is also much easier for us to
handle updates. Because of this, we have decided to release fixed
packages for Qubes OS 3.1, even though it recently reached EOL. Both
Qubes OS 3.1 and Qubes OS 3.2 use the same Xen version, so this
requires no additional work.

Patching
=========

The specific packages that resolve the problem discussed in this
bulletin will be uploaded to the security-testing repository:

For Qubes 3.1 and 3.2:
  * Xen packages, version 4.6.4-26

The packages are to be installed in Dom0 via the qubes-dom0-update
command or via the Qubes VM Manager.

A system restart will be required afterwards.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18 will change due to the new
xen.gz binary.

These packages will migrate to the current (stable) repository over
the coming days after being tested by the community.

Credits
========

This issue was discovered by Jann Horn of Google Project Zero and
reported to the Xen Security Team.

References
===========

[1] https://xenbits.xen.org/xsa/advisory-212.html
[2] https://xenbits.xen.org/xsa/
[3] https://github.com/QubesOS/qubes-infrastructure/
[4] https://www.xenproject.org/security-policy.html
[5] https://www.qubes-os.org/news/2016/07/21/new-hw-certification-for-q4/
[6] https://www.qubes-os.org/security/xsa/

--
The Qubes Security Team
https://www.qubes-os.org/security/