
Preview3: Sleeper

If you have not done so already, retrieve the sample launch plans:

$ git clone https://github.com/ooici/launch-plans.git
$ cd launch-plans

It does not matter where they go, but the virtualenv that we installed earlier does need to be active:

$ which cloudinitd
/tmp/venv/bin/cloudinitd
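
If cloudinitd does not resolve like this, activate the virtualenv first; a minimal step, assuming it was created at /tmp/venv as shown above:

$ source /tmp/venv/bin/activate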

Follow the simple steps in "sandbox/burned-sleeper/README.txt"

$ ls launch-plans/sandbox/burned-sleeper/

Those steps will install the provisioner credentials (see the main page for an explanation of what the provisioner is doing and why it needs its own credentials delivered).
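
To read those steps before following them:

$ cat launch-plans/sandbox/burned-sleeper/README.txt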

Once the provisioner credentials are loaded, we just need to kick off the launch-plan. A launch-plan, in general, is one specific blueprint that references only specific packages (or git checkouts), so that the whole thing is repeatable.
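
If you are curious what this particular plan references, you can open the configuration files themselves; this assumes main.conf and the "level*" files live in the burned-sleeper directory alongside the README:

$ less launch-plans/sandbox/burned-sleeper/main.conf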

Pick a name for this particular "run"; let's say "sleep01":

$ cloudinitd boot main.conf -n sleep01 -v -v -f /tmp/sleep01.log

The "boot" part is the subcommand (see "cloudinitd commands" for an explanation of each subcommand), main.conf is the "top" or "entry" into the launch-plan (this can be specified by path, you do not need to be in the same directory), and "-n" is how you specify the run name.

If you do not specify a run name, one will be picked for you, but we want to coordinate later with the epumgmt tool, so it is good to have an easy name.

IMPORTANT: Always use a new run name for every "boot" command. epumgmt keeps remnants from old runs around (for post-mortem analysis), so run names should never be reused.

The "-v -v -f /tmp/sleep01.log" part just tells it to put a lot of output to that file. You will not see too much output on the console, this is because there is the potential for many many services to be booted simultaneously in one boot level. A lot of output (especially logs about what is happening in each service) would be too much to take.

If you run into issues, make sure your SSH credentials are set up correctly. Look in the "level*" files and ensure they specify the private key file you want to use. Also make sure this SSH key is registered in the proper EC2 region (east vs. west are different) with the right name (stick with "ooi").
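
One quick way to check which key file the plan points at, assuming you are in the burned-sleeper directory (any editor works just as well):

$ grep -i key level*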

If you see three levels of success, great! If you don't, your best recourse at the moment is to contact the authors.
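
Before reaching out, it is worth scanning the launch log for the first sign of trouble, for example:

$ grep -i error /tmp/sleep01.log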

Once the third boot level succeeds, an EPU controller for the sleeper service has launched (see the main page for an overview). And because the sample configuration calls for one worker to be running immediately, a worker will be started right away.

Important: what you have launched with cloudinit.d is the EPU system itself. It can be status checked and terminated on its own, but the EPU system in turn starts worker VMs. To examine or terminate those workers, you need the epumgmt tool.
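
To check the cloudinit.d side of the run by itself, there is a status subcommand; the exact usage is on the "cloudinitd commands" page, and the form below assumes it takes the same -n run name as boot:

$ cloudinitd status -n sleep01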

Run the following command:

$ ./bin/epumgmt.sh -a find-workers -n sleep01

What this will do is examine the provisioner for any worker instance launches.

After that you could run this to retrieve logs from the system:

$ ./bin/epumgmt.sh -a logfetch -n sleep01

What this will do is transfer all the logfiles from all the capability containers in the system. You can examine events this way (there is even a special event format for this that we use to make parsing easier).
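
Once the transfer finishes, you can search the fetched logs locally; the directory below is a placeholder for wherever your epumgmt setup writes the retrieved files:

$ grep -ri sleeper <directory-where-logs-were-fetched>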

Now to terminate the system, run:

$ ./bin/epumgmt.sh -a killrun -n sleep01

This kills the workers (technically the provisioner kills the workers) and also instructs cloudinit.d (via API) to tear down the EPU system.
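
For an independent confirmation that no worker VMs were left running, you can list your instances with the standard EC2 tooling (separate from cloudinit.d and epumgmt), for example:

$ ec2-describe-instances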
