Installation & Configuration Guide

  • The purpose of this guide is to set up a fully functional virtualization node with LXD support.
  • For the list of supported features, see the Readme.

Table of Contents

1 - Frontend Setup
2 - Virtualization Node setup
3 - Add a LXD Image
4 - Add a LXD Virtualization Node to OpenNebula
5 - Add the LXD bridged network
6 - LXD Template Creation in OpenNebula
7 - Provision a new LXD Container from the template

1 - Frontend Setup

1.1 Installation

Follow the frontend installation instructions in the OpenNebula deployment guide.

1.2 LXDoNe integration

  • The LXDoNe drivers must be installed on the OpenNebula frontend server to add LXD virtualization and monitoring support.

1.2.1 Drivers

Download the latest release and extract it to the oneadmin drivers directory

tar -xf <lxdone-release>.tar.gz
cp -r addon-lxdone-*/src/remotes/ /var/lib/one/

Set the appropriate permissions on the newly installed drivers

cd /var/lib/one/remotes/
chown -R oneadmin:oneadmin vmm/lxd im/lxd*
chmod 755 -R vmm/lxd im/lxd*
chmod 644 im/lxd.d/collectd-client.rb
cd -

Optional: Add support for 802.1Q driver (VLANs)

Replace /var/lib/one/remotes/vnm/nic.rb with the version shipped in addon-lxdone.

cp -rpa addon-lxdone-*/src/one_wait/nic.rb /var/lib/one/remotes/vnm/nic.rb
chown oneadmin:oneadmin /var/lib/one/remotes/vnm/nic.rb
chmod 755 /var/lib/one/remotes/vnm/nic.rb

Note: A pull request has been made to OpenNebula's official Network Driver to add 802.1Q functionality by default.

1.2.2 Enable LXD

Modify /etc/one/oned.conf as root. Under Information Driver Configuration add this:

#---------------------------
# lxd Information Driver Manager Configuration
# -r number of retries when monitoring a host
# -t number of threads, i.e. number of hosts monitored at the same time
#---------------------------
IM_MAD = [ NAME = "lxd",
EXECUTABLE = "one_im_ssh",
ARGUMENTS = "-r 3 -t 15 lxd" ]
#---------------------------

Under Virtualization Driver Configuration add this:

#---------------------------
# lxd Virtualization Driver Manager Configuration
# -r number of retries when executing an action
# -t number of threads, i.e. number of actions performed at the same time
#---------------------------
VM_MAD = [ NAME = "lxd",
EXECUTABLE = "one_vmm_exec",
ARGUMENTS = "-t 15 -r 0 lxd",
KEEP_SNAPSHOTS = "yes",
TYPE = "xml",
IMPORTED_VMS_ACTIONS = "migrate, live-migrate, terminate, terminate-hard, undeploy, undeploy-hard, hold, release, stop, suspend, resume, delete, delete-recreate, reboot, reboot-hard, resched, unresched, poweroff, poweroff-hard, disk-attach, disk-detach, nic-attach, nic-detach, snap-create, snap-delete"]
#---------------------------

Restart OpenNebula

systemctl restart opennebula
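
Optionally, confirm that oned picked up the new drivers after the restart; the log path below assumes a default packaged installation:

grep -i lxd /var/log/one/oned.log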

2 - Virtualization Node setup

Installed KVM packages that are not needed for LXD support

  • The opennebula-node package installs KVM-required software.
  • Many of the KVM packages may be removed if you do not want to support KVM VMs; see the example below.
  • If you will be using Ceph storage, do not remove the libvirt package.
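
For example, on Ubuntu 16.04 the KVM-specific packages could be removed with something like the following; the package name is an assumption for a default opennebula-node installation, and libvirt should be kept if you use Ceph:

apt-get purge qemu-kvm
apt-get autoremove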

2.1 Install required packages

Ubuntu 16.04 (Xenial Xerus)

apt install lxd lxd-tools python-pylxd/xenial-updates \
                criu bridge-utils \
                python-ws4py python-pip

Check that pylxd is >= 2.0.5 or LXDoNe will not work correctly.

dpkg -s python-pylxd | grep 2.0.5 || echo "ERROR: pylxd version is not 2.0.5"

pip-installed software

  • isoparser can parse the ISO 9660 disk image format, including Rock Ridge extensions.
pip install isoparser

2.2 VNC server

  • LXDoNe uses Simple VNC Terminal Emulator (svncterm) as the VNC server.
  • svncterm is a fork of vncterm with a simplified codebase, including the removal of TLS support.
  • svncterm allows the VNC option to be used in the VM template definition.
  • We provide a package for Ubuntu 16.04 in our releases section. To compile svncterm for another distribution, follow the instructions in svncterm's README.

Install svncterm

wget https://github.com/OpenNebula/addon-lxdone/releases/download/v5.2-4.1/svncterm_1.2-1ubuntu_amd64.deb
apt install libjpeg62 libvncserver1 && dpkg -i svncterm_1.2-1ubuntu_amd64.deb

2.3 oneadmin user

Allow oneadmin to execute commands as root using sudo without a password, and add it to the lxd group.

echo "oneadmin ALL= NOPASSWD: ALL" >> /etc/sudoers.d/oneadmin
usermod -a -G lxd oneadmin
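
To verify both changes (the lxd group membership takes effect on oneadmin's next login):

sudo -u oneadmin sudo -n true && echo "passwordless sudo OK"
id oneadmin | grep --color lxd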

2.4 Loopback devices

  • Every File System Datastore image used by LXDoNe will require one loop device.
  • The default limit for loop devices is 8.
  • To run more than 8 LXD containers, the loop device limit must be increased:
echo "options loop max_loop=128" >> /etc/modprobe.d/local-loop.conf
echo "loop" >> /etc/modules-load.d/modules.conf
depmod
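
After a reboot, or after reloading the loop module with the new option, the limit can be checked:

cat /sys/module/loop/parameters/max_loop   # should print 128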

2.5 LXD

2.5.1 Configure the LXD Daemon

  • By default, LXD does not activate a TCP listener; it listens only on a local UNIX socket and is therefore not reachable over the network.
lsof | grep -i lxd | egrep --color '(LISTEN|sock)'

lxc config show
  • To use LXD with OpenNebula and the LXDoNe addon, we need to configure LXD to have a TCP listener.
lxd init --auto \
         --storage-backend dir \
         --network-address 0.0.0.0 \
         --network-port    8443 \
         --trust-password  password

lsof | grep -i lxd | egrep --color '(LISTEN|sock)'

lxc config show
  • storage-backend: LXD supports many different storage backend types.
    • Common backends are the ZFS filesystem and dir.
    • LXDoNe has supported a Ceph backend since its initial release through workarounds, before LXD itself did; official support has recently been added by the LXD developers.
  • network-address: 0.0.0.0 instructs LXD to listen on all available IP addresses and interfaces.
  • trust-password: used by remote clients to vouch for their client certificate.
    • By default, LXD allows all members of the lxd group to talk to it over the UNIX socket.
    • Remote TCP network communication is authorized using SSL certificates.
    • When a new remote client registers with LXD, it provides the trust password once; after the client's certificate has been trusted by LXD, the password does not need to be re-entered. An example of registering a remote client is shown below.
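
For example, registering this node from another machine that has the lxc client installed only asks for the trust password once; the remote name and IP below are placeholders:

lxc remote add lxd-node 192.168.0.10   # prompts for the trust password
lxc list lxd-node: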

2.5.2 LXD Profile

  • Containers inherit properties from a configuration profile.
  • The installation of LXD will create a default profile which needs to be modified to work with OpenNebula.
lxc profile list
lxc profile show default

Network

  • The default profile contains a definition for the eth0 network device.
  • Because eth0 is not managed by OpenNebula, the eth0 device needs to be removed from the profile.
lxc profile device remove default eth0

Autostarting

  • When the LXD host is powered off, containers remember the state they were in at that moment.
  • That is, if a container was running before the LXD host was shut down, it will be started again after the host boots up.
  • This behaviour may seem appropriate, but it is not managed by OpenNebula, so it needs to be disabled.
lxc profile set default boot.autostart 0

Security and Nesting

  • Unset these keys in the default profile; they can be enabled per container from the VM template (see section 6 below).
lxc profile unset default security.privileged
lxc profile unset default security.nesting

Unix Block Device Mounting

 echo Y > /sys/module/ext4/parameters/userns_mounts
  • This setting does not persist across reboots; it should be made permanent, for example via /etc/rc.local, as sketched below.
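
A minimal way to persist the setting, assuming your distribution still executes /etc/rc.local at boot, is to insert the command before the final exit 0 line:

sed -i '/^exit 0/i echo Y > /sys/module/ext4/parameters/userns_mounts' /etc/rc.local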

Weird profile config

  • Since some LXD 2.0.x release, the default profile has shipped with some odd networking-related configuration.
  • This can mess up your containers' internal configuration by modifying their environment variables. We advise removing it, and perhaps asking the LXD developers about it:
 lxc profile unset default user.network_mode
 lxc profile unset default environment.http_proxy

2.5.3 User IDs

  • Containers run in namespaces.
  • A user namespace has a mapping from host uids to container uids.
  • For instance, the range of host uids 100000-165535 might be mapped to container uids 0-65535.
  • The subordinate user id (subuid) and subordinate group id (subgid) files must have entries for lxd and root to allow LXD containers to work.
  • The LXDoNe recommended starting ID is 100000, with a range of 65536. Verify the entries with the command below; an example of adding missing entries follows.
egrep --color '(lxd|root)' /etc/subuid /etc/subgid

lxd:100000:65536
root:100000:65536
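
If either file is missing these entries, they can be added manually, for example:

echo "lxd:100000:65536" >> /etc/subuid
echo "root:100000:65536" >> /etc/subuid
echo "lxd:100000:65536" >> /etc/subgid
echo "root:100000:65536" >> /etc/subgid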

3 - Add a LXD Image

  • LXD native images are basically compressed files.
  • However, OpenNebula uses block based images by default.
  • Because the formats are different, default LXD images will not work with LXDoNe.
  • LXD images must be converted to the OpenNebula-compatible format. More information about creating OpenNebula-compatible LXD images is available here; a rough sketch of the idea is shown below.
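
As a rough illustration only, not the official procedure from the linked guide, the general idea is to place a container root filesystem inside a raw, formatted image; the paths, image size and temporary container name below are assumptions:

truncate -s 1G ubuntu-1604-lxd.img
mkfs.ext4 -F ubuntu-1604-lxd.img
mount -o loop ubuntu-1604-lxd.img /mnt
lxc init ubuntu:16.04 imgsrc   # temporary container used only as a rootfs source
cp -a /var/lib/lxd/containers/imgsrc/rootfs/. /mnt/
lxc delete imgsrc
umount /mnt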

Download a pre-built image

You can download a pre-built OpenNebula-compatible LXD Image for Ubuntu 16.04 from the OpenNebula marketplace.

  • !!THIS IMAGE MAY BE OUTDATED!! Since we do not have OpenNebula-compatible storage for the marketplace, we cannot update this image very often.
  • The default username is: team
  • The default password for the team user is: team

We keep updated images on MEGA, Dropbox and Google Drive.
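
Once downloaded, the image can be registered through Sunstone or from the CLI, for example; the name, path and datastore are placeholders:

oneimage create --name ubuntu-1604-lxd --path /tmp/ubuntu-1604-lxd.img --type OS --datastore default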

4 - Add a LXD Virtualization Node to OpenNebula

  • In the OpenNebula web console in the Infrastructure section
  • Choose Hosts
  • Create a Host.
  • Type: Custom
  • Drivers:
    • Virtualization: Select Custom
    • Information: Select Custom
    • Custom VMM_MAD: lxd
    • Custom IM_MAD: lxd
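
The same host can also be added from the CLI; the hostname is a placeholder:

onehost create lxd-node01 --im lxd --vm lxd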

[Screenshot: creating the host]

5 - Add the LXD bridged network

  • In the OpenNebula web console in the Network section
  • Choose Virtual Networks
  • Create a new network
  • On the General tab
    • Set the Name
  • On the Conf tab
    • Bridge br0 or lxdbr0
  • On the Addresses tab
    • Select IPv4 if using br0, or Ethernet if using lxdbr0 or an external DHCP service
    • First IP/MAC address: some private IP available on the network.
    • Size: 200
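
An equivalent network can be created from the CLI with a small template file; the bridge, driver and address range below are assumptions that must match your environment:

cat > lxd-net.tmpl <<EOF
NAME   = "lxd-bridged"
# VN_MAD is "dummy" on OpenNebula 5.2; newer releases call this driver "bridge"
VN_MAD = "dummy"
BRIDGE = "br0"
AR     = [ TYPE = "IP4", IP = "192.168.0.100", SIZE = "200" ]
EOF
onevnet create lxd-net.tmpl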

[Screenshot: virtual network configuration]

6 - LXD Template Creation in OpenNebula

  • In the OpenNebula web console in the Templates section

  • Choose VMs

  • Create a VM Template.

  • General:

    • Name
    • Memory (ex. 32MB)
    • CPU (ex. 0.1)
    • VCPU (optional ex. 1)
      • VCPU is the number of cores the container can use.
      • If left blank, the container will use all the cores, up to the fraction defined by CPU.
      • For a host with 8 CPUs, if the VM template states 2 VCPU, then the container has 2 of the 8 CPUs allocated.
  • Storage:

    • On Disk 0, select the ID of the source image that will provide the disk.

[Screenshot: VM template]

Optional data

  • Network:
    • Select one or more network interfaces. They will appear configured inside the container.
  • Input/Output:
    • Select VNC under graphics.
  • Other:
    • LXD_SECURITY_NESTING = 'true' for creating containers inside the container.
    • LXD_SECURITY_PRIVILEGED = 'true' to make the container privileged.
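
A roughly equivalent template can also be created from the CLI; the image and network IDs and the values below are placeholders:

cat > lxd-vm.tmpl <<EOF
NAME     = "ubuntu-lxd"
MEMORY   = "32"
CPU      = "0.1"
VCPU     = "1"
DISK     = [ IMAGE_ID = "0" ]
NIC      = [ NETWORK_ID = "0" ]
GRAPHICS = [ TYPE = "VNC", LISTEN = "0.0.0.0" ]
LXD_SECURITY_NESTING = "true"
EOF
onetemplate create lxd-vm.tmpl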

[Screenshot: security options]

7 - Provision a new LXD Container from the template

  • In the OpenNebula web console in the Instances section
  • Choose VMs
  • Add a VM
  • Select an LXD template and click Create.
  • Wait for the scheduler to execute the drivers.
  • In the Log section there will be additional information, such as the time spent executing action scripts and any errors that occur.
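
The deployment can also be followed from the CLI; the VM id is a placeholder, and the per-VM driver log lives on the frontend:

onevm list
onevm show 0
cat /var/log/one/0.log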

[Screenshot: VM log]

  • If VNC is enabled for the container, the VNC session will open at the login prompt inside the container.

[Screenshot: VNC session]