
Quick Start


Lochness Quick Start

Objective

To set up and boot a 3-node mistify-os/lochness cluster and demonstrate creation of guest virtual machines.

You can follow along with the screencast here, or download it from here.

Process

Setting up a lochness cluster is broken into two main steps: setting up the etcd cluster and setting up the lochness cluster-services.

Files / Scripts

All the files used here are available from this wiki through these links:

File                     Link
Guest Config Setup       guest-setup.sh
HV VM Network Setup      ifup
HV Host Network Setup    net.sh
Node0 Script             node0.sh
Node1&2 Script           node
HV Node HW Info          nodes.sh
Rootfs Image             initrd.mistify
Kernel Image             bzImage.mistify

Setup the etcd cluster

The etcd cluster is set up by first running a temporary etcd cluster on node0 and using it to let the remaining nodes, node1 and node2, auto-configure themselves. Nodes 1 & 2 will set up the permanent etcd cluster between themselves, and node0 will then join that cluster. The high level steps to accomplish this are:

  1. Ensure disks are unpartitioned
  2. Boot node0 (use ./node0.sh if testing with VMs)
  3. Configure the cluster by editing /root/cluster-init-config within node0
  4. Set up the cluster using a temporary single-node etcd cluster by running /root/cluster-init within node0
  5. Upload the vmlinuz and initrd files to /var/lib/images/0.1.0 on node0 (see the upload sketch after this list)
  6. Boot node1 and node2 (use ./node 1/2 if testing with VMs)
  7. Nodes 1 & 2 will create the permanent etcd cluster
  8. node0 dumps the current etcd cluster data to a backup file
  9. node0 destroys temporary etcd cluster and joins the permanent cluster
  10. node0 restores the cluster data using the dumped contents
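
A minimal sketch of step 5 and a post-join sanity check, assuming hv0 resolves to node0, that bzImage.mistify and initrd.mistify from the table above are the kernel and rootfs to upload, and that the node's etcdctl supports cluster-health:

# copy the kernel and rootfs images into the image directory on node0
scp bzImage.mistify initrd.mistify root@hv0:/var/lib/images/0.1.0/

# after nodes 1 & 2 are up and node0 has rejoined (steps 7-10), check cluster health
ssh hv0 'etcdctl cluster-health'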

Setup lochness cluster-services

The cluster now needs to be finished by starting the necessary lochness services. The key services are those that manage hypervisor node configurations (chypervisord), manage guest state (cguestd), select the hypervisor node to place a guest on (cplacerd), and broker tasks (cworkerd). These services can be run on any host, but for simplicity we will run them on node0.

cluster-init is only tasked with setting up the etcd cluster and its nodes. We now have to set up the services necessary for lochness to be able to create/delete/start/stop guests. These services are what we term cluster services: services where one or more instances run throughout the cluster on any configured node. Cluster services differ from node services in that each node service runs on every individual node. The lists of services are:

Node Services

  • etcd (full member or proxy)
  • mistify-agent (and subagents)
  • nconfigd
  • nfirewalld (not used yet)
  • nheartbeatd

Cluster Services

  • beanstalkd
  • cbootstrapd
  • cdhcpd & dhcpd
  • cguestd
  • chypervisord
  • cplacerd
  • cworkerd
  • image
  • named
  • tftpd
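
A hedged way to spot-check the node services on a hypervisor, assuming hv1 is one of the nodes and the systemd unit names match the service names listed above:

# check the node services on node1 (unit names are assumptions based on the list above)
ssh hv1 'systemctl status etcd mistify-agent nconfigd nheartbeatd'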

chypervisord

chypervisord is the service that manages hypervisor configs; these include the IP address, UUID/hostname, and the cluster-services which run on the node. We need to set it up on node0 by using etcdctl.

ssh hv0 'etcdctl set /lochness/hypervisors/$(hostname)/config/chypervisord true'

nconfigd will notice the change in config and start the chypervisord service. This can be verified by running systemctl status chypervisord.service.
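
For example, the same verification run from outside the node (hv0 as above):

ssh hv0 'systemctl status chypervisord.service'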

Remaining Services

The remaining cluster-services can be configured from outside the cluster nodes using the operator CLI application hv. hv is used to connect to chypervisord and set up/modify nodes.

# node0's uuid/hostname
id='44454c4c-3900-1039-8046-b8c04f593132'
hv config modify -s http://hv0:17000 $id '{"beanstalkd":"true"}'
hv config modify -s http://hv0:17000 $id '{"cwokerd":"true","cplacerd":”true”,"cguestd":"true"}'

Virtual Machines

In order to get guest virtual machines running, the guest configuration must be set up. This includes images, networks, subnets, flavors, and firewall rules (the firewall is not currently used). The guest-setup.sh script has an initial configuration to start with and will be run on one of the nodes so that a guest can be placed on it. guest-setup.sh will print an example call to guest to create a new VM guest.

ssh root@hv1 < guest-setup.sh
# ...
# information about the configs is printed
# ending in a line like
# guest create -s http://hv0:18000 '{"flavor":"a59b540a-6b49-4284-a955-43238bc1c928","fwgroup":"e6d012e9-d845-4c72-a4ce-3a2fcd159f89","network":"6daea4b6-95d0-42a8-bbe0-612ed3396235","mac":"A4:75:C1:6B:E3:50"}'
# which can be run to create a guest

Note:

Guest source image fetching is hard-coded to builds.mistify.io/guest-images, which has a very slow uplink. It is recommended to mirror the files locally on the network or on the host running the HV VMs (python -m SimpleHTTPServer 80 works well), and then edit guest-setup.sh to retrieve the images from there.

-server='builds.mistify.io'
+server='10.10.10.10'
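
A minimal sketch of such a mirror, assuming the mirror host is 10.10.10.10 as in the diff above and that the guest images have already been downloaded into the current directory:

# serve the downloaded guest images over HTTP on port 80 (Python 2 syntax, as noted above)
python -m SimpleHTTPServer 80

# point guest-setup.sh at the mirror before running it (assumes the server= line shown in the diff)
sed -i "s/^server=.*/server='10.10.10.10'/" guest-setup.sh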

Guest CLI

The guest CLI application is analogous to the hv application; it is used to configure guests and to run actions on them. All guest invocations return a job-id, which can be used later with the guest job subcommand to retrieve the status/information of the requested action.

Everything should be in place to manage guests. Run guest -h to see a list of actions to take; start with create (using the example printed above) and then start.
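
A hedged example tying this together; the JSON is the one printed by guest-setup.sh above, <job-id> stands in for whatever id guest create returns, and the -s flag for guest job is an assumption based on the other subcommands:

# create a guest using the config printed by guest-setup.sh
guest create -s http://hv0:18000 '{"flavor":"a59b540a-6b49-4284-a955-43238bc1c928","fwgroup":"e6d012e9-d845-4c72-a4ce-3a2fcd159f89","network":"6daea4b6-95d0-42a8-bbe0-612ed3396235","mac":"A4:75:C1:6B:E3:50"}'

# check the status of the returned job
guest job -s http://hv0:18000 <job-id>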
