
Kubernetes with kubevirt and kata-containers, inside Vagrant (with libvirt/kvm)

Provides a consistent way to run Kubernetes (k8s) with KubeVirt and Kata Containers locally, provided your hardware supports nested virtualization.

Note: this installs Kubernetes from the official k8s packages on Ubuntu 18.04, via kubeadm (not minikube or another installer).

If this works well inside Vagrant, the same provisioning scripts could in principle be used to install a similar cluster on a machine running Ubuntu natively.


Includes configuration for:

  • the Ubuntu 18.04 Vagrant base image: generic/ubuntu1804, run inside qemu/kvm
  • Kubernetes installed with kubeadm, with:
    • the cri-o container runtime, which is supposed to be compatible with both KubeVirt and Kata Containers (other options, such as containerd, may be possible)
    • kubelet and cri-o configured consistently to use the systemd cgroups driver
    • the Calico CNI network system (other options may be possible)
    • Rancher’s Local Path Provisioner, to automatically allocate PersistentVolumes from a directory of the node’s file-system
    • KubeVirt, for deploying full-fledged cloud VMs on the cluster via qemu/kvm (in a similar way to IaaS clouds)
    • the Containerized Data Importer for KubeVirt, to handle automatic import of VM images
    • Kata Containers, for running pods/containers sandboxed inside mini VMs (qemu as well)

This is a reworked Vagrant setup, based on an initial version, incorporating additions from other sources together with my own findings.


Here’s a recording of the Vagrant provisioning of the cluster:


Starting the cluster inside Vagrant

Install prerequisites

Support for nested virtualization

For this to work, your hardware’s CPUs must provide virtualization support allowing “nested virtualization”. Since we will run qemu/kvm VMs inside the qemu/kvm machine started by Vagrant, this support is essential.

  • Check that you have support for this in your CPU
  • Check that the KVM modules are configured to allow it
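Both checks can be done from the shell on Linux (the module parameter path depends on your CPU vendor):

```shell
# Count CPU flags advertising hardware virtualization support
# (vmx on Intel, svm on AMD); any count above 0 is good
grep -Ec 'vmx|svm' /proc/cpuinfo || true

# Check that the KVM module permits nested guests; expected output: Y (or 1)
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null \
  || cat /sys/module/kvm_amd/parameters/nested 2>/dev/null \
  || echo "KVM module not loaded"
```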


Ensure you have vagrant installed, with its libvirt/KVM virtualization driver

You may install it using your distribution’s packages.

On Arch Linux:

sudo pacman -S vagrant

On Debian/Ubuntu:

sudo apt-get install vagrant
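Depending on how Vagrant was installed, the libvirt provider may need to be added as a plugin (some distributions instead ship it as a separate package, e.g. vagrant-libvirt):

```shell
# Install the libvirt provider plugin, unless your distribution
# already packages it separately
vagrant plugin install vagrant-libvirt

# Sanity check: the plugin should appear in the list
vagrant plugin list
```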

Run it

Clone this repo, then:

vagrant up --provider=libvirt

The lengthy provisioning process will then run.

SSH into the VM

Once provisioning ends, the cluster is ready.

You’ll perform most of the work inside the Vagrant VM:

vagrant ssh

Check that the k8s cluster is up and running:

kubectl get nodes

Access your code inside the VM

We automatically mount /tmp/vagrant into /home/vagrant/data.

For example, you may git clone some Kubernetes manifests into /tmp/vagrant on your host machine, then access them from within the Vagrant machine.

This sharing is bi-directional, and achieved via the vagrant-sshfs plugin.
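A quick way to see the share in action (the file name here is just an example):

```shell
# On the host: stage a file in the shared directory
mkdir -p /tmp/vagrant
echo 'hello from the host' > /tmp/vagrant/note.txt

# In the guest, after `vagrant ssh`, the same file is visible:
#   cat ~/data/note.txt
```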

Deploy stuff on the cluster

Once the k8s cluster is running you may test deployment of virtualized applications and systems.

Testing “regular cloud VMs” via KubeVirt

Basic VM instances

  • declare a Kubevirt virtual machine to be started with qemu/kvm:
    kubectl apply -f
    kubectl get vms
  • start the VM’s execution (takes a while: downloading VM image, etc.)
    virtctl start testvm
    # wait until the VM is started
    kubectl wait --timeout=180s --for=condition=Ready pod -l
    # you can check the execution of qemu
    ps aux | grep qemu-system-x86_64
  • connect to the VM’s console
    virtctl console testvm

it may take a while for messages, and eventually a login prompt, to appear on the console (press ENTER if need be)

Testing automatic VM image import with DataVolumes

We have prepared a few deployment manifests to test booting VMs from boot disk images specified by URL.

Example with a Fedora machine

  • copy the fedora-datavolume.yaml manifest into the cluster host inside Vagrant:
    cp examples-kubevirt/fedora-datavolume.yaml /tmp/vagrant

    it will be available in ~vagrant/data/fedora-datavolume.yaml

  • connect via vagrant ssh, and:
    • create the DataVolume and VM Instance definitions:
      kubectl create -f data/fedora-datavolume.yaml
    • check that the DataVolume was created:
      kubectl get dv
      NAME        AGE 
      fedora28-dv 4m58s
    • check that the corresponding PersistentVolume Claim was allocated (automatically, thanks to the Local Path Provisioner):
      kubectl get pvc
      NAME        STATUS VOLUME                                   CAPACITY ACCESS MODES STORAGECLASS AGE 
      fedora28-dv Bound  pvc-b2bc560a-6b88-11e9-a6b2-525400a08028 10Gi     RWO          local-path   5m21s
    • look at the corresponding Persistent Volume:
      kubectl get pv
      pvc-b2bc560a-6b88-11e9-a6b2-525400a08028 10Gi     RWO          Delete         Bound  default/fedora28-dv local-path          5m20s
    • watch the importer download the boot disk image and convert it automatically, thanks to Containerized Data Importer (CDI), so that qemu can boot it:
      kubectl logs -f -l

      you’ll be able to check the growth of the contents of the PVC, where the disk.img boot disk for qemu will be constructed:

      du -sh /opt/local-path-provisioner/pvc-b2bc560a-6b88-11e9-a6b2-525400a08028/
      277M /opt/local-path-provisioner/pvc-b2bc560a-6b88-11e9-a6b2-525400a08028/
    • once the image is imported, watch the importer’s logs:
      kubectl logs -f -l
  • Finally, you can connect to the VM’s console:
    virtctl console testvmfedora29

Note that you may also manage import of cloud images via the Containerized Data Importer with:

mv ubuntu-18.04-server-cloudimg-amd64.img ubuntu-18.04-server-cloudimg-amd64.qcow2
virtctl image-upload --pvc-name=upload-pvc --pvc-size=10Gi --image-path=ubuntu-18.04-server-cloudimg-amd64.qcow2 --uploadproxy-url=https://$(kubectl get service -n cdi cdi-uploadproxy -o wide | awk 'NR==2 {print $3}'):443/ --insecure
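The awk pipeline above extracts the CLUSTER-IP column from the second line of the kubectl output; kubectl’s jsonpath output can fetch the same address without column parsing (an equivalent alternative, not a requirement):

```shell
# Fetch the upload proxy's ClusterIP directly, without parsing columns
kubectl get service -n cdi cdi-uploadproxy -o jsonpath='{.spec.clusterIP}'
```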


Testing Kata Containers

You can also test, from inside the Vagrant VM, launching containers sandboxed inside “qemu mini-VMs”:

kubectl apply -f

Once the container is running, you can run a shell inside it:

kubectl exec -it $(kubectl get pod -l run=php-apache-kata-qemu -o wide | awk 'NR==2 {print $1}') bash
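The command substitution above picks the pod name from the second line of kubectl’s tabular output; jsonpath can do the same without column parsing:

```shell
# Equivalent, using jsonpath to get the first matching pod's name
kubectl exec -it "$(kubectl get pod -l run=php-apache-kata-qemu \
  -o jsonpath='{.items[0].metadata.name}')" -- bash
```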

Deploying a similar cluster on a real OS

The scripts may be used, in the same order, to deploy a cluster on a (non-virtualized) Ubuntu 18.04 Server machine.

So far, the only limitation found relates to AppArmor constraints on libvirt, which prevent KubeVirt from starting VMs.

An immediate workaround is to disable the profile (which may not be the best idea, YMMV):

sudo ln -s /etc/apparmor.d/usr.sbin.libvirtd /etc/apparmor.d/disable/usr.sbin.libvirtd