
HOWTO: Creating images for additional Virtual Application types


Introduction

Before fully making use of CBTOOL, you will have to create images for all other Virtual Application types. While we provide a method for the automatic creation of all images, unfortunately some packages and binaries (e.g., coremark, parboil) simply cannot be downloaded automatically, due to licensing restrictions.

1. Download all third party requirements

Go to cbtool/3rd_party/workload/ and open manually_download_files.txt. Follow the instructions there to download all the requested files; in the end this directory should be populated with a series of .tar, .tgz, .deb and .rpm files (again, this procedure is required only for files that cannot be downloaded automatically).
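
As a quick sanity check (a sketch only; the exact file names depend on which workloads you intend to use), you can confirm that the directory now contains the manually downloaded archives and packages:

cd cbtool/3rd_party/workload
# list the manually downloaded archives and packages, if any are present
ls -l *.tar *.tgz *.deb *.rpm 2>/dev/null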

2. Optional, but highly recommended for anyone experimenting with "pure Docker" clouds (PDM): Build all the workloads from Dockerfiles

The automated image building mechanism used by CBTOOL can be executed against VMs, containers (Docker and LXD), or even bare-metal nodes. However, it extracts the actual commands used to install the dependencies directly from Dockerfiles. Therefore, it is always a good idea to try to build all the Virtual Application types (workloads) from their original Dockerfiles first. This way, if the builds succeed, there is a high probability that the building of the actual workload images, be it on VMs, containers or bare-metal, will also work.

Assuming that the Docker engine is co-located with the CBTOOL Orchestrator Node, just execute:

cd cbtool/docker
./build_workloads.sh -r <myrepository>

In case the node running the Docker engine is different from the CBTOOL Orchestrator Node, you will have to copy the contents of the cbtool/docker directory there, and then add an additional parameter to the execution:

cd cbtool/docker
./build_workloads.sh -r <myrepository> --rsync <FILESTORE_HOSTNAME>:<FILESTORE_PORT>/<CBTOOL_USERNAME>_cb

For instance, using the previously mentioned CBTOOL Orchestrator Node, we will have, for a non-colocated Docker engine:

cd cbtool/docker
./build_workloads.sh -r <myrepository> --rsync 172.17.1.2:10000/cbuser_cb 

Typically, the building of Docker images for all Virtual Application types takes around 1 hour.
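
Once the script finishes, one way to confirm that the images were created (a sketch; the repository name is whatever you passed with -r, and the resulting image list will vary with the workloads you built) is:

# list all images built into the repository passed with -r
docker images | grep <myrepository>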

3. Also optional, but highly recommended for anyone experimenting with "pure Libvirt" clouds (PLM): Build all the workloads into qcow2 images

The automated image building mechanism used by CBTOOL can be executed against VMs, containers (Docker and LXD), or even bare-metal nodes. However, for the direct creation of qcow2 images, we leverage the virt-customize utility. This results in qcow2 images which can be either directly used by "pure Libvirt" clouds or imported into other clouds (e.g., OpenStack). Please note that we recommend performing this process after all Docker images containing the workloads are created (in the previous step), just to increase the likelihood of a successful build here.

Assuming that the Libvirt daemon is co-located with the CBTOOL Orchestrator Node, just execute:

cd cbtool/kvm-qemu
./build_workloads.sh -r <path to my libvirt storage pool>
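
If you are unsure about the path of your Libvirt storage pool, it can be queried with virsh (a sketch assuming the commonly used pool named "default"; adjust the pool name to your setup):

# show the directory backing the "default" storage pool
virsh pool-dumpxml default | grep '<path>'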

In case the node running the Libvirt daemon is different from the CBTOOL Orchestrator Node, you will have to copy the cbtool/kvm-qemu directory there, and then add an additional parameter to the execution:

cd cbtool/kvm-qemu
./build_workloads.sh -r <path to my libvirt storage pool> --rsync <FILESTORE_HOSTNAME>-<FILESTORE_PORT>-<CBTOOL_USERNAME>

For instance, using the previously mentioned CBTOOL Orchestrator Node, we will have, for a non-colocated Libvirt daemon:

cd cbtool/kvm-qemu
./build_workloads.sh -r <path to my libvirt storage pool> --rsync 172.17.1.2-10000-cbuser 

Typically, the building of qcow2 images for all Virtual Application types takes around 2 hours.
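
After the script completes, one way to verify the resulting images (a sketch; the pool name "default" and the image file name are placeholders, and the actual names will follow your own setup) is:

# list the volumes now present in the storage pool used for the build
virsh vol-list default
# inspect one of the generated images
qemu-img info <path to my libvirt storage pool>/<image name>.qcow2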

4. Build images for the rest of the Virtual Application types:

At this point we have: a) downloaded all the third-party requirements that required manual intervention, b) made them available through an rsync server running on the CBTOOL Orchestrator Node, and c) checked that all images can be built from Dockerfiles. We can now proceed to the other images.

For instance, to create an image for the "iperf" VApp (typeshow iperf for more information), use the already created "nullworkload" image as a base (this is highly recommended, albeit not strictly necessary) and restart the previously described procedure, skipping directly to step 2.3 (i.e., vmattach check:cb_nullworkload:ubuntu:iperf, login and run the install command, followed by vmcapture youngest cb_iperf).
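
Putting these pieces together, the sequence inside the CBTOOL CLI looks roughly like the following (a sketch based on the commands above; the "ubuntu" base and the exact install command for iperf depend on your setup):

# instantiate a VM from the captured nullworkload image, in "check" mode,
# preparing it to receive the iperf workload
vmattach check:cb_nullworkload:ubuntu:iperf
# log into the newly created VM and run the workload's install command
# (as instructed by the output of the vmattach command above); then,
# back in the CBTOOL CLI, capture the resulting image
vmcapture youngest cb_iperf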

5. Restart CBTOOL again

When done, exit the CBTOOL CLI, and re-execute it forcing a re-read of the configuration file with cb --soft_reset (--hard_reset could also be used).
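
For example (a sketch assuming the CLI is invoked from the cbtool directory, as in the previous sections):

cd cbtool
# restart the CLI, forcing a re-read of the configuration file
./cb --soft_reset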

NEXT STEP: Proceed to the section Run simple experiments
