AMD Secure Encrypted Virtualization
Secure Encrypted Virtualization (SEV)

SEV is an extension to the AMD-V architecture which supports running encrypted virtual machines (VMs) under the control of KVM. An encrypted VM has its pages (code and data) secured such that only the guest itself can access them in unencrypted form. Each encrypted VM is associated with a unique encryption key; if its memory is accessed by a different entity using a different key, the data is decrypted incorrectly, yielding unintelligible contents.

SEV support has been accepted in the upstream projects. This repository provides scripts to build the various components needed for SEV support until the distros pick up the newer versions of those components.

To enable SEV support, the following minimum versions are needed:

Project   Version
kernel    >= 4.16
libvirt   >= 4.5
qemu      >= 2.12
ovmf      >= commit 75b7aa9528bd (2018-07-06)
  • Installing a newer libvirt may conflict with existing setups, hence the script does not install the newer version of libvirt. If you are interested in launching an SEV guest through virsh commands, build and install libvirt 4.5 or higher and use the LaunchSecurity tag (https://libvirt.org/formatdomain.html#sev) to create the SEV-enabled guest.

  • SEV support is not available in SeaBIOS. Guests must use OVMF.
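The requirements above can be sanity-checked on the host before building anything; a minimal sketch (ver_ge is a hypothetical helper, and the kernel check shown generalizes to the qemu/libvirt version strings):

```shell
# ver_ge A B: succeed when version A >= version B (relies on sort -V)
ver_ge() { [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]; }

# Compare the running kernel against the minimum from the table
kver=$(uname -r | cut -d- -f1)
if ver_ge "$kver" 4.16; then
    echo "kernel $kver: OK"
else
    echo "kernel $kver: too old, need >= 4.16"
fi
```

The same ver_ge comparison can be applied to the version printed by `qemu-system-x86_64 --version` against 2.12.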

SLES-15

SUSE Linux Enterprise Server 15 GA includes SEV support; we do not need to compile the sources.

SLES-15 does not yet contain the updated libvirt packages, hence we will use the QEMU command-line interface to launch VMs.

Prepare Host OS

SEV is not enabled by default; let's enable it through the kernel command line.

Append the following to /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT=".... mem_encrypt=on kvm_amd.sev=1"

Regenerate grub.cfg and reboot the host

# grub2-mkconfig -o /boot/efi/EFI/sles/grub.cfg
# reboot
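After rebooting, you can confirm SEV was actually enabled on the host; a sketch (check_sev_host is a hypothetical helper, and the sysfs path assumes the kvm_amd module is loaded):

```shell
# Report whether SEV is enabled in the kvm_amd module (sysfs path assumed)
check_sev_host() {
    p=/sys/module/kvm_amd/parameters/sev
    if [ -r "$p" ] && grep -q '[1Y]' "$p"; then
        echo "SEV enabled in kvm_amd"
    else
        echo "SEV not enabled (or kvm_amd not loaded)"
    fi
}
check_sev_host
```

On a working host, `dmesg | grep -i sev` should also show SEV-related firmware initialization messages.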

Install the QEMU launch script, which can be obtained from this project:

# git clone https://github.com/AMDESE/AMDSEV.git
# cd AMDSEV/distros/sles-15
# ./build.sh

Prepare VM image

Create empty virtual disk image

# qemu-img create -f qcow2 sles-15.qcow2 30G

Create a new copy of OVMF_VARS.fd. The OVMF_VARS.fd is a "template" used to emulate persistent NVRAM storage. Each VM needs a private, writable copy of VARS.fd.

# cp /usr/share/qemu/ovmf-x86_64-suse-4m-vars.bin OVMF_VARS.fd

Download and install sles-15 guest

# launch-qemu.sh -hda sles-15.qcow2 -cdrom SLE-15-Installer-DVD-x86_64-GM-DVD1.iso -nosev

Follow the on-screen instructions to complete the guest installation.

Launch VM

Use the following command to launch the SEV guest

# launch-qemu.sh -hda sles-15.qcow2

NOTE: while the guest is booting, CTRL-C is mapped to CTRL-]; use CTRL-] to stop the guest

RHEL-8

Red Hat Enterprise Linux 8.0 GA includes SEV support; we do not need to compile the sources.

Prepare Host OS

SEV is not enabled by default; let's enable it through the kernel command line.

Append the following to /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT=".... mem_encrypt=on kvm_amd.sev=1"

Regenerate grub.cfg and reboot the host

# grub2-mkconfig -o /boot/efi/EFI/redhat/grub.cfg
# reboot

Install the QEMU launch script, which can be obtained from this project:

# git clone https://github.com/AMDESE/AMDSEV.git
# cd AMDSEV/distros/rhel-8
# ./build.sh

Prepare VM image

Create empty virtual disk image

# qemu-img create -f qcow2 rhel-8.qcow2 30G

Create a new copy of OVMF_VARS.fd. The OVMF_VARS.fd is a "template" used to emulate persistent NVRAM storage. Each VM needs a private, writable copy of VARS.fd.

# cp /usr/share/OVMF/OVMF_VARS.fd OVMF_VARS.fd

Download and install rhel-8 guest

# launch-qemu.sh -hda rhel-8.qcow2 -cdrom RHEL-8.0.0-20190404.2-x86_64-dvd1.iso

Follow the on-screen instructions to complete the guest installation.

Launch VM

Use the following command to launch the SEV guest

# launch-qemu.sh -hda rhel-8.qcow2

NOTE: while the guest is booting, CTRL-C is mapped to CTRL-]; use CTRL-] to stop the guest

Fedora-28

Fedora-28 includes newer kernel and ovmf packages but an older qemu. We will need to update QEMU to launch an SEV guest.

Prepare Host OS

SEV is not enabled by default; let's enable it through the kernel command line.

Append the following to /etc/default/grub

GRUB_CMDLINE_LINUX_DEFAULT=".... mem_encrypt=on kvm_amd.sev=1"

Regenerate grub.cfg and reboot the host

# grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
# reboot

Build and install newer qemu

# cd distros/fedora-28
# ./build.sh

Prepare VM image

Create empty virtual disk image

# qemu-img create -f qcow2 fedora-28.qcow2 30G

Create a new copy of OVMF_VARS.fd. The OVMF_VARS.fd is a "template" used to emulate persistent NVRAM storage. Each VM needs a private, writable copy of VARS.fd.

# cp /usr/share/OVMF/OVMF_VARS.fd OVMF_VARS.fd

Download and install fedora-28 guest

# launch-qemu.sh -hda fedora-28.qcow2 -cdrom  Fedora-Workstation-netinst-x86_64-28-1.1.iso

Follow the on-screen instructions to complete the guest installation.

Launch VM

Use the following command to launch the SEV guest

# launch-qemu.sh -hda fedora-28.qcow2

NOTE: while the guest is booting, CTRL-C is mapped to CTRL-]; use CTRL-] to stop the guest

Fedora-29

Fedora-29 contains all the prerequisite packages to launch an SEV guest, but the SEV feature is not enabled by default; this section documents how to enable it.

Prepare Host OS

  • Add new udev rule for the /dev/sev device

    # cat /etc/udev/rules.d/71-sev.rules
    KERNEL=="sev", MODE="0660", GROUP="kvm"
    
  • Clean libvirt caches so that on restart libvirt re-generates the capabilities

    # rm -rf /var/cache/libvirt/qemu/capabilities/
    
  • The default FC-29 kernel (4.18) has SEV disabled in its config, but the kernel available through the FC-29 updates has the SEV config enabled.

    Use the following commands to upgrade the packages and also install the virtualization packages

    # yum groupinstall virtualization
    # yum upgrade
    
  • By default SEV is disabled; append the following to /etc/default/grub

     GRUB_CMDLINE_LINUX_DEFAULT=".... mem_encrypt=on kvm_amd.sev=1"
    

    Regenerate grub.cfg and reboot the host

     # grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
     # reboot
    
  • Install the qemu launch script

     # cd distros/fedora-29
     # ./build.sh
    

Prepare VM image

Create empty virtual disk image

# qemu-img create -f qcow2 fedora-29.qcow2 30G

Create a new copy of OVMF_VARS.fd. The OVMF_VARS.fd is a "template" used to emulate persistent NVRAM storage. Each VM needs a private, writable copy of VARS.fd.

# cp /usr/share/edk2/ovmf/OVMF_VARS.fd OVMF_VARS.fd

Download and install fedora-29 guest

# launch-qemu.sh -hda fedora-29.qcow2 -cdrom  Fedora-Workstation-netinst-x86_64-29-1.1.iso

Follow the on-screen instructions to complete the guest installation.

Launch VM

Use the following command to launch the SEV guest

# launch-qemu.sh -hda fedora-29.qcow2

NOTE: while the guest is booting, CTRL-C is mapped to CTRL-]; use CTRL-] to stop the guest

Ubuntu 18.04

Ubuntu 18.04 does not include versions of the components new enough to be used as an SEV hypervisor, hence we will build and install a newer kernel, qemu, and ovmf.

Prepare Host OS

  • Enable source repositories

  • Build and install newer components

# cd distros/ubuntu-18.04
# ./build.sh

Prepare VM image

Create empty virtual disk image

# qemu-img create -f qcow2 ubuntu-18.04.qcow2 30G

Create a new copy of OVMF_VARS.fd. The OVMF_VARS.fd is a "template" used to emulate persistent NVRAM storage. Each VM needs a private, writable copy of VARS.fd.

# cp /usr/local/share/qemu/OVMF_VARS.fd OVMF_VARS.fd

Install ubuntu-18.04 guest

# launch-qemu.sh -hda ubuntu-18.04.qcow2 -cdrom ubuntu-18.04-desktop-amd64.iso

Follow the on-screen instructions to complete the guest installation.

Launch VM

Use the following command to launch the SEV guest

# launch-qemu.sh -hda ubuntu-18.04.qcow2

NOTE: while the guest is booting, CTRL-C is mapped to CTRL-]; use CTRL-] to stop the guest

openSUSE-Tumbleweed

The latest openSUSE Tumbleweed contains all the prerequisite packages to launch an SEV guest, but the SEV feature is not enabled by default; this section documents how to enable it.

Prepare Host OS

  • Add new udev rule for the /dev/sev device

    # cat /etc/udev/rules.d/71-sev.rules
    KERNEL=="sev", MODE="0660", GROUP="kvm"
    
  • Clean libvirt caches so that on restart libvirt re-generates the capabilities

    # rm -rf /var/cache/libvirt/qemu/capabilities/
    # systemctl restart libvirtd
    
  • The SEV feature is not enabled in the kernel by default; let's enable it through the kernel command line.

    Append the following to /etc/default/grub

     GRUB_CMDLINE_LINUX_DEFAULT=".... mem_encrypt=on kvm_amd.sev=1"
    

    Regenerate grub.cfg and reboot the host

    # grub2-mkconfig -o /boot/efi/EFI/opensuse/grub.cfg
    # reboot
    

Launch SEV VM

Since virt-manager does not support SEV yet, we need to use the 'virsh' command to launch the SEV guest. See xmls/sample.xml for how to add the SEV-specific information to an existing xml. Use the following command to launch the SEV guest

# virsh create sample.xml

The sample xml was generated through virt-manager and then edited with SEV-specific information. The main changes are:

  • For virtio devices we need to enable the DMA APIs. They are enabled through the driver iommu='on' tag (QEMU's iommu_platform=on):
    <controller type='virtio-serial' index='0'> 
      <driver iommu='on' /> 
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> 
    </controller>
  • Add LaunchSecurity tag to tell libvirt to enable memory-encryption
    <launchSecurity type='sev'>
      <policy>0x0001</policy>
      <cbitpos>47</cbitpos>
      <reducedPhysBits>1</reducedPhysBits>
    </launchSecurity>
  • QEMU pins the guest memory during the SEV guest launch, hence we need to set the domain-specific memory parameters to raise the memlock rlimit. For example, the memtune tags below raise the memlock limit to 5GB:
    <memtune>
      <hard_limit unit='G'>5</hard_limit>
      <soft_limit unit='G'>5</soft_limit>
    </memtune>  
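Before handing the XML to virsh, a quick grep can confirm the SEV-specific elements made it in; a hedged sketch (check_sev_xml and the stand-in fragment file are illustrative, not part of this repository):

```shell
# check_sev_xml FILE: succeed only if every SEV-specific element is present
check_sev_xml() {
    for tag in launchSecurity policy cbitpos reducedPhysBits; do
        grep -q "<$tag" "$1" || { echo "$tag: MISSING"; return 1; }
    done
    echo "all SEV elements present"
}

# Stand-in fragment; in practice run: check_sev_xml sample.xml
cat > /tmp/sev-fragment.xml <<'EOF'
<launchSecurity type='sev'>
  <policy>0x0001</policy>
  <cbitpos>47</cbitpos>
  <reducedPhysBits>1</reducedPhysBits>
</launchSecurity>
EOF
check_sev_xml /tmp/sev-fragment.xml
```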

SEV Containers

Container runtimes that use hardware virtualization to further isolate container workloads can also make use of SEV. As a proof-of-concept, the kata branch contains an SEV-capable version of the Kata Containers runtime that will start all containers inside of SEV virtual machines.

For installation instructions on Ubuntu systems, see the README.

Additional Resources

SME/SEV white paper

SEV API Spec

APM Section 15.34

KVM forum slides

KVM forum videos

Linux kernel


Libvirt LaunchSecurity tag

Libvirt SEV domainCap

Qemu doc

FAQ

  • How do I know if the hypervisor supports the SEV feature?

    a) When using libvirt >= 4.5, run the following command

    # virsh domcapabilities
    

    If the hypervisor supports the SEV feature, the sev tag will be present.

    See Libvirt DomainCapabilities feature for additional information.

    b) Use the qemu QMP 'query-sev-capabilities' command to check SEV support. If SEV is supported, the command will return the full SEV capabilities (which include the host PDH, cert-chain, cbitpos and reduced-phys-bits).

    See QMP doc for details on how to interact with QMP shell.
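    For reference, the relevant fragment looks roughly like this when SEV is supported (the XML below is simulated output; on a real host, pipe `virsh domcapabilities` into the grep instead, and expect cbitpos/reducedPhysBits values that vary by CPU):

```shell
# Simulated `virsh domcapabilities` output, stored so it can be filtered
caps=$(cat <<'EOF'
<domainCapabilities>
  <features>
    <sev supported='yes'>
      <cbitpos>47</cbitpos>
      <reducedPhysBits>1</reducedPhysBits>
    </sev>
  </features>
</domainCapabilities>
EOF
)
# Show the <sev> element and its children
echo "$caps" | grep -A3 '<sev'
```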

  • How do I know if SEV is enabled in the guest?

    a) Check the kernel log buffer for the following message

    # dmesg | grep -i sev
    AMD Secure Encrypted Virtualization (SEV) active
    

    b) MSR 0xc0010131 (MSR_AMD64_SEV) can be used to determine if SEV is active

    # rdmsr -a 0xc0010131
    
    Bit[0]:   0 = SEV is not active
              1 = SEV is active
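    The bit test itself is simple shell arithmetic (a sketch; 0x1 is an example value, substitute the rdmsr output):

```shell
# Decode MSR_AMD64_SEV (0xc0010131); bit 0 set means SEV is active
msr_val=0x1   # example value; use the rdmsr output here
if [ $(( msr_val & 1 )) -eq 1 ]; then
    echo "SEV is active"
else
    echo "SEV is not active"
fi
```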
    

  • Can I use virt-manager to launch an SEV guest?

    virt-manager uses libvirt to manage VMs. SEV support has been added to libvirt, but virt-manager does not use the newly introduced LaunchSecurity tag yet, hence we are not able to launch SEV guests through virt-manager.

    If your system has libvirt >= 4.5, you can manually edit the xml file to use LaunchSecurity to enable SEV support in the guest.

  • How do I increase the SWIOTLB limit?

When SEV is enabled, all DMA operations inside the guest are performed on shared (unencrypted) memory. The Linux kernel uses the SWIOTLB bounce buffer for DMA inside an SEV guest, and the guest will panic if the kernel runs out of the SWIOTLB pool. The kernel defaults to a 64MB SWIOTLB pool; it is recommended to increase the pool size to 512MB. To do so, append the following to /etc/default/grub in the guest

GRUB_CMDLINE_LINUX_DEFAULT=".... swiotlb=262144"

And regenerate the grub.cfg.
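The swiotlb= value is a slab count, where each slab is 2KB, so 262144 slabs yield the recommended 512MB:

```shell
# swiotlb= takes a slab count; each slab is 2KB, so compute the pool size in MB
slabs=262144
echo "swiotlb pool: $(( slabs * 2 / 1024 )) MB"
```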

  • virtio-blk device runs into out-of-DMA-buffer errors

To support multiqueue mode, the virtio-blk driver inside the guest allocates a large number of DMA buffers. An SEV guest uses SWIOTLB for DMA buffer allocation and mapping, so the kernel can exhaust the SWIOTLB pool quickly and trigger out-of-memory errors. In those cases, consider increasing the SWIOTLB pool size or using a virtio-scsi device.
