Getting started with libvirt CoreOS

Highlights

  • Super-fast cluster boot-up (a few seconds instead of several minutes with Vagrant)
  • Reduced disk usage thanks to COW
  • Reduced memory footprint thanks to KSM

Prerequisites

  1. Install qemu
  2. Install libvirt
  3. Grant libvirt access to your user¹
  4. Check that your $HOME is accessible to the qemu user²

¹ Depending on your distribution, libvirt access may be denied by default or may require a password at each access.

You can test it with the following command:

virsh -c qemu:///system pool-list

If you get access error messages, please read https://libvirt.org/acl.html and https://libvirt.org/aclpolkit.html.

In short, if your libvirt has been compiled with Polkit support (e.g. Arch, Fedora 21), you can create /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules with the following content to grant $USER full access to libvirt:

polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});

(Replace $USER with your login name)

If your libvirt has not been compiled with Polkit (e.g. Ubuntu 14.04.1 LTS), check the permissions on the libvirt unix socket:

ls -l /var/run/libvirt/libvirt-sock
srwxrwx--- 1 root libvirtd 0 Feb 12 16:03 /var/run/libvirt/libvirt-sock

usermod -a -G libvirtd $USER
# $USER needs to log out and back in for the new group to take effect

(Replace $USER with your login name)
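
After logging back in, you can confirm that the new group membership was picked up, for example with:

groups $USER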

² Qemu runs as a specific user, which must have access to the VMs' drives

All the disk drive resources needed by the VM (CoreOS disk image, kubernetes binaries, cloud-init files, etc.) are put inside ./cluster/libvirt-coreos/libvirt_storage_pool.

Because we're using the qemu:///system instance of libvirt, qemu runs as a specific user:group distinct from your own; this is configured in /etc/libvirt/qemu.conf. That qemu user must have access to the libvirt storage pool.
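
To see which user and group qemu runs as on your system, you can look at /etc/libvirt/qemu.conf (commented-out lines show the compiled-in defaults), for example:

grep -E '^#?(user|group) ' /etc/libvirt/qemu.conf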

If your $HOME is world readable, everything is fine. If your $HOME is private, cluster/kube-up.sh will fail with an error message like:

error: Cannot access storage file '$HOME/.../kubernetes/cluster/libvirt-coreos/libvirt_storage_pool/kubernetes_master.img' (as uid:99, gid:78): Permission denied

To fix that issue, you have several options:

  • Set POOL_PATH inside cluster/libvirt-coreos/config-default.sh (see the sketch below) to a directory that is:
    • backed by a filesystem with plenty of free disk space;
    • writable by your user;
    • accessible to the qemu user.
  • Grant the qemu user access to the storage pool.

On Arch:

setfacl -m g:kvm:--x ~
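
If you prefer to relocate the storage pool instead, here is a minimal sketch of the edit in cluster/libvirt-coreos/config-default.sh; the directory /var/lib/kubernetes-libvirt-pool is just an example, and it must already exist, be writable by your user, and be accessible to the qemu user:

# in cluster/libvirt-coreos/config-default.sh (example path, adjust to your setup)
POOL_PATH=/var/lib/kubernetes-libvirt-pool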

Setup

By default, the libvirt-coreos setup will create a single kubernetes master and 3 kubernetes minions. Because the VM drives use Copy-on-Write and because of memory ballooning and KSM, there is a lot of resource over-allocation.

To start your local cluster, open a shell and run:

cd kubernetes

export KUBERNETES_PROVIDER=libvirt-coreos
cluster/kube-up.sh

The KUBERNETES_PROVIDER environment variable tells all of the various cluster management scripts which variant to use. If you forget to set this, the assumption is you are running on Google Compute Engine.

The NUM_MINIONS environment variable may be set to specify the number of minions to start. If it is not set, the number of minions defaults to 3.

You can check that your machines are there and running with:

virsh -c qemu:///system list
 Id    Name                           State
----------------------------------------------------
 15    kubernetes_master              running
 16    kubernetes_minion-01           running
 17    kubernetes_minion-02           running
 18    kubernetes_minion-03           running

You can check that the kubernetes cluster is working with:

$ ./cluster/kubectl.sh get minions
NAME                LABELS              STATUS
192.168.10.2        <none>              Ready
192.168.10.3        <none>              Ready
192.168.10.4        <none>              Ready

The VMs are running CoreOS. Your SSH keys have already been pushed to the VMs (the setup looks for ~/.ssh/id_*.pub). The user to connect with is core. The IP of the master is 192.168.10.1; the minions' IPs start at 192.168.10.2.

Connect to kubernetes_master:

ssh core@192.168.10.1

Connect to kubernetes_minion-01:

ssh core@192.168.10.2

Interacting with your Kubernetes cluster with the kube-* scripts

All of the following commands assume you have set KUBERNETES_PROVIDER appropriately:

export KUBERNETES_PROVIDER=libvirt-coreos

Bring up a libvirt-CoreOS cluster of 5 minions

NUM_MINIONS=5 cluster/kube-up.sh

Destroy the libvirt-CoreOS cluster

cluster/kube-down.sh

Update the libvirt-CoreOS cluster with a new Kubernetes release

cluster/kube-push.sh

Interact with the cluster

cluster/kubectl.sh
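
For example, to list the pods running in the cluster:

cluster/kubectl.sh get pods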

Troubleshooting

!!! Cannot find kubernetes-server-linux-amd64.tar.gz

Build the release tarballs:

make release

Can't find virsh in PATH, please fix and retry.

Install libvirt

On Arch:

pacman -S qemu libvirt

On Ubuntu 14.04.1:

aptitude install qemu-system-x86 libvirt-bin

On Fedora 21:

yum install qemu libvirt

error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

Start the libvirt daemon

On Arch:

systemctl start libvirtd

On Ubuntu 14.04.1:

service libvirt-bin start

error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': Permission denied

Fix the libvirt access permissions (remember to replace $USER with your login name)

On Arch and Fedora 21:

cat > /etc/polkit-1/rules.d/50-org.libvirt.unix.manage.rules <<EOF
polkit.addRule(function(action, subject) {
        if (action.id == "org.libvirt.unix.manage" &&
            subject.user == "$USER") {
                polkit.log("action=" + action);
                polkit.log("subject=" + subject);
                return polkit.Result.YES;
        }
});
EOF

On Ubuntu:

usermod -a -G libvirtd $USER
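# $USER needs to log out and back in for the new group to take effect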