Commit

Merge pull request #592 from dachary/wip-6502-openstack-v3
transparent OpenStack provisioning for teuthology-suite
zmc committed Sep 2, 2015
2 parents 4ec4652 + 74ed108 commit 36d148e
Showing 45 changed files with 2,647 additions and 23 deletions.
138 changes: 138 additions & 0 deletions README.rst
@@ -320,6 +320,144 @@ specified in ``$HOME/.teuthology.yaml``::

test_path: <directory>

OpenStack backend
=================

The ``teuthology-openstack`` command is a wrapper around
``teuthology-suite`` that transparently creates the teuthology cluster
using OpenStack virtual machines.

Prerequisites
-------------

An OpenStack tenant with access to the nova and cinder APIs (for
instance http://entercloudsuite.com/) is required. If the cinder API is
not available (for instance https://www.ovh.com/fr/cloud/), some jobs
won't run because they expect volumes attached to each instance.

Setup OpenStack at Enter Cloud Suite
------------------------------------

* create an account and `log in to the dashboard <https://dashboard.entercloudsuite.com/>`_
* `create an Ubuntu 14.04 instance
<https://dashboard.entercloudsuite.com/console/index#/launch-instance>`_
with 1GB RAM and a public IP, then destroy it immediately afterwards.
* get $HOME/openrc.sh from `the horizon dashboard <https://horizon.entercloudsuite.com/project/access_and_security/?tab=access_security_tabs__api_access_tab>`_

The creation/destruction of an instance via the dashboard is the
shortest path to create the network, subnet and router that would
otherwise need to be created via the neutron API.
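
If the dashboard is not an option, the same resources can be created
directly via the neutron API. The following is a minimal, hypothetical
Python sketch using ``python-neutronclient`` (not part of this commit);
the network name and CIDR are illustrative, and the router's external
gateway must still be set separately::

    import os

    from neutronclient.v2_0 import client

    # Credentials come from the environment populated by "source openrc.sh".
    neutron = client.Client(username=os.environ['OS_USERNAME'],
                            password=os.environ['OS_PASSWORD'],
                            tenant_name=os.environ['OS_TENANT_NAME'],
                            auth_url=os.environ['OS_AUTH_URL'])

    # Network, subnet and router equivalent to what the dashboard creates.
    net = neutron.create_network({'network': {'name': 'teuthology'}})
    subnet = neutron.create_subnet({'subnet': {
        'network_id': net['network']['id'],
        'ip_version': 4,
        'cidr': '10.0.0.0/24',
    }})
    router = neutron.create_router({'router': {'name': 'teuthology'}})
    neutron.add_interface_router(router['router']['id'],
                                 {'subnet_id': subnet['subnet']['id']})
    # add_gateway_router() can then attach the router to the public network.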

Setup OpenStack at OVH
----------------------

OVH is cheaper than Enter Cloud Suite but does not provide volumes (as
of August 2015) and is therefore unsuitable for teuthology tests that
require disks attached to the instance. Each instance has a public IP
by default.

* `create an account <https://www.ovh.com/fr/support/new_nic.xml>`_
* get $HOME/openrc.sh from `the horizon dashboard <https://horizon.cloud.ovh.net/project/access_and_security/?tab=access_security_tabs__api_access_tab>`_

Setup
-----

* Get and configure teuthology::

$ git clone -b wip-6502-openstack-v3 http://github.com/dachary/teuthology
$ cd teuthology ; ./bootstrap install
$ source virtualenv/bin/activate

Get OpenStack credentials and test it
-------------------------------------

* follow the `OpenStack API Quick Start <http://docs.openstack.org/api/quick-start/content/index.html#cli-intro>`_
* source $HOME/openrc.sh
* verify the OpenStack client works (a scripted equivalent is sketched after this list)::

$ nova list
+----+------------+--------+------------+-------------+-------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------------+--------+------------+-------------+-------------------------+
+----+------------+--------+------------+-------------+-------------------------+
* create a passwordless ssh keypair with::

$ openstack keypair create myself > myself.pem
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | e0:a3:ab:5f:01:54:5c:1d:19:40:d9:62:b4:b3:a1:0b |
| name | myself |
| user_id | 5cf9fa21b2e9406b9c4108c42aec6262 |
+-------------+-------------------------------------------------+
$ chmod 600 myself.pem
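
The same check can be scripted (a hedged sketch, not part of this
commit) with ``python-novaclient``, reading the variables exported by
``openrc.sh``::

    import os

    from novaclient import client

    # Credentials come from the environment populated by "source openrc.sh".
    nova = client.Client('2',
                         os.environ['OS_USERNAME'],
                         os.environ['OS_PASSWORD'],
                         os.environ['OS_TENANT_NAME'],
                         os.environ['OS_AUTH_URL'])

    # Equivalent to "nova list": an empty result means the credentials
    # work and no instance is running yet.
    print(nova.servers.list())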

Usage
-----

* Create a passwordless ssh keypair::

$ openstack keypair create myself > myself.pem
$ chmod 600 myself.pem

* Run the dummy suite (it does nothing useful but shows that everything
works as expected)::

$ teuthology-openstack --key-filename myself.pem --key-name myself --suite dummy
Job scheduled with name ubuntu-2015-07-24_09:03:29-dummy-master---basic-openstack and ID 1
2015-07-24 09:03:30,520.520 INFO:teuthology.suite:ceph sha1: dedda6245ce8db8828fdf2d1a2bfe6163f1216a1
2015-07-24 09:03:31,620.620 INFO:teuthology.suite:ceph version: v9.0.2-829.gdedda62
2015-07-24 09:03:31,620.620 INFO:teuthology.suite:teuthology branch: master
2015-07-24 09:03:32,196.196 INFO:teuthology.suite:ceph-qa-suite branch: master
2015-07-24 09:03:32,197.197 INFO:teuthology.repo_utils:Fetching from upstream into /home/ubuntu/src/ceph-qa-suite_master
2015-07-24 09:03:33,096.096 INFO:teuthology.repo_utils:Resetting repo at /home/ubuntu/src/ceph-qa-suite_master to branch master
2015-07-24 09:03:33,157.157 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy generated 1 jobs (not yet filtered)
2015-07-24 09:03:33,158.158 INFO:teuthology.suite:Scheduling dummy/{all/nop.yaml}
2015-07-24 09:03:34,045.045 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy scheduled 1 jobs.
2015-07-24 09:03:34,046.046 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy -- 0 jobs were filtered out.

2015-07-24 11:03:34,104.104 INFO:teuthology.openstack:
web interface: http://167.114.242.13:8081/
ssh access : ssh ubuntu@167.114.242.13 # logs in /usr/share/nginx/html

* Visit the web interface (the URL is displayed at the end of the
teuthology-openstack output) to monitor the progress of the suite.

* The virtual machine running the suite will persist for forensic
analysis purposes. To destroy it run::

$ teuthology-openstack --key-filename myself.pem --key-name myself --teardown

* The test results can be uploaded to a publicly accessible location
with the ``--upload`` flag::

$ teuthology-openstack --key-filename myself.pem --key-name myself \
--suite dummy --upload

Running the OpenStack backend integration tests
-----------------------------------------------

The easiest way to run the integration tests is to first run a dummy suite::

$ teuthology-openstack --key-name myself --suite dummy
...
ssh access : ssh ubuntu@167.114.242.13

This creates a virtual machine suitable for the integration
tests. Log in with the ssh command displayed at the end of the
``teuthology-openstack`` output and run the following::

$ pkill -f teuthology-worker
$ cd teuthology ; pip install "tox>=1.9"
$ tox -v -e openstack-integration
integration/openstack-integration.py::TestSuite::test_suite_noop PASSED
...
========= 9 passed in 2545.51 seconds ========
$ tox -v -e openstack
integration/test_openstack.py::TestTeuthologyOpenStack::test_create PASSED
...
========= 1 passed in 204.35 seconds =========

VIRTUAL MACHINE SUPPORT
=======================
2 changes: 1 addition & 1 deletion bootstrap
@@ -27,7 +27,7 @@ Linux)
# C) Adding "Precise" conditionals somewhere, eg. conditionalizing
# this bootstrap script to only use the python-libvirt package on
# Ubuntu Precise.
for package in python-dev libssl-dev python-pip python-virtualenv libevent-dev python-libvirt libmysqlclient-dev libffi-dev; do
for package in python-dev libssl-dev python-pip python-virtualenv libevent-dev python-libvirt libmysqlclient-dev libffi-dev libyaml-dev libpython-dev ; do
if [ "$(dpkg --status -- $package|sed -n 's/^Status: //p')" != "install ok installed" ]; then
# add a space after old values
missing="${missing:+$missing }$package"
72 changes: 72 additions & 0 deletions docs/siteconfig.rst
@@ -109,3 +109,75 @@ Here is a sample configuration with many of the options set and documented::
# armv7l
# etc.
baseurl_template: http://{host}/{proj}-{pkg_type}-{dist}-{arch}-{flavor}/{uri}

# The OpenStack backend configuration, a dictionary interpreted as follows
#
openstack:

# The teuthology-openstack command will clone teuthology with
# this command for the purpose of deploying teuthology from
# scratch and running workers that listen on the openstack tube
#
clone: git clone -b wip-6502-openstack-v3 http://github.com/dachary/teuthology

# The path to the user-data file used when creating a target. It can have
# the {os_type} and {os_version} placeholders, which are replaced with
# the values of --os-type and --os-version. No instance of a given {os_type}
# and {os_version} combination can be created unless such a file exists.
#
user-data: teuthology/openstack/openstack-{os_type}-{os_version}-user-data.txt
# The IP address of the instance running the teuthology cluster. It will
# be used to build user-facing URLs and should usually be the floating IP
# associated with the instance running the pulpito server.
#
ip: 8.4.8.4

# OpenStack has predefined machine sizes (called flavors).
# For a given job requiring N machines, the following settings select
# the smallest flavor that satisfies these requirements (a short
# selection sketch in Python follows this configuration example).
# For instance, if there are three flavors
#
# F1 (10GB disk, 2000MB RAM, 1CPU)
# F2 (100GB disk, 7000MB RAM, 1CPU)
# F3 (50GB disk, 7000MB RAM, 1CPU)
#
# and machine: { disk: 40, ram: 7000, cpus: 1 }, F3 will be chosen:
# F1 does not have enough RAM (2000 instead of the 7000 minimum) and,
# although F2 satisfies all the requirements, it is larger than F3
# (100GB instead of 50GB) and presumably more expensive.
#
# This configuration applies to all instances created for teuthology jobs
# that do not redefine these values.
#
machine:
# The minimum root disk size of the flavor, in GB
#
disk: 20 # GB

# The minimum RAM size of the flavor, in MB
#
ram: 8000 # MB

# The minimum number of vCPUS of the flavor
#
cpus: 1

# The volumes attached to each instance. In the following example,
# three volumes of 10 GB will be created for each instance and
# will show up as /dev/vdb, /dev/vdc and /dev/vdd
#
#
# This configuration applies to all instances created for teuthology jobs
# that do not redefine these values.
#
volumes:

# The number of volumes
#
count: 3
# The size of each volume, in GB
#
size: 10 # GB
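
The flavor selection rule documented above can be illustrated with a short
Python sketch (not part of this commit; the flavor list and field names are
made up for the example)::

    # Pick the smallest flavor that satisfies the job requirements.
    flavors = [
        {'name': 'F1', 'disk': 10,  'ram': 2000, 'cpus': 1},
        {'name': 'F2', 'disk': 100, 'ram': 7000, 'cpus': 1},
        {'name': 'F3', 'disk': 50,  'ram': 7000, 'cpus': 1},
    ]
    wanted = {'disk': 40, 'ram': 7000, 'cpus': 1}

    def fits(flavor, wanted):
        """True if the flavor meets every minimum requirement."""
        return all(flavor[k] >= wanted[k] for k in ('disk', 'ram', 'cpus'))

    candidates = [f for f in flavors if fits(f, wanted)]
    # Smallest acceptable flavor: sort by RAM, then disk, then vCPUs.
    best = min(candidates, key=lambda f: (f['ram'], f['disk'], f['cpus']))
    assert best['name'] == 'F3'  # F1 lacks RAM, F2 is larger than needed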
161 changes: 161 additions & 0 deletions scripts/openstack.py
@@ -0,0 +1,161 @@
import argparse
import sys

import teuthology.openstack


def main(argv=sys.argv[1:]):
teuthology.openstack.main(parse_args(argv), argv)


def parse_args(argv):
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description="""
Run a suite of ceph integration tests. A suite is a directory containing
facets. A facet is a directory containing config snippets. Running a suite
means running teuthology for every configuration combination generated by
taking one config snippet from each facet. Any config files passed on the
command line will be used for every combination, and will override anything in
the suite. By specifying a subdirectory in the suite argument, it is possible
to limit the run to a specific facet. For instance -s upgrade/dumpling-x only
runs the dumpling-x facet of the upgrade suite.
Display the http and ssh access to follow the progress of the suite
and analyze results.
firefox http://183.84.234.3:8081/
ssh -i teuthology-admin.pem ubuntu@183.84.234.3
""")
parser.add_argument(
'-v', '--verbose',
action='store_true', default=None,
help='be more verbose',
)
parser.add_argument(
'--name',
help='OpenStack primary instance name',
default='teuthology',
)
parser.add_argument(
'--key-name',
help='OpenStack keypair name',
required=True,
)
parser.add_argument(
'--key-filename',
help='path to the ssh private key',
)
parser.add_argument(
'--simultaneous-jobs',
help='maximum number of jobs running in parallel',
type=int,
default=2,
)
parser.add_argument(
'--teardown',
action='store_true', default=None,
help='destroy the cluster, if it exists',
)
parser.add_argument(
'--upload',
action='store_true', default=False,
help='upload archives to an rsync server',
)
parser.add_argument(
'--archive-upload',
help='rsync destination to upload archives',
default='ubuntu@teuthology-logs.public.ceph.com:./',
)
# copy/pasted from scripts/suite.py
parser.add_argument(
'config_yaml',
nargs='*',
help='Optional extra job yaml to include',
)
parser.add_argument(
'--dry-run',
action='store_true', default=None,
help='Do a dry run; do not schedule anything',
)
parser.add_argument(
'-s', '--suite',
help='The suite to schedule',
)
parser.add_argument(
'-c', '--ceph',
help='The ceph branch to run against',
default='master',
)
parser.add_argument(
'-k', '--kernel',
help=('The kernel branch to run against; if not '
'supplied, the installed kernel is unchanged'),
)
parser.add_argument(
'-f', '--flavor',
help=("The kernel flavor to run against: ('basic',"
"'gcov', 'notcmalloc')"),
default='basic',
)
parser.add_argument(
'-d', '--distro',
help='Distribution to run against',
)
parser.add_argument(
'--suite-branch',
help='Use this suite branch instead of the ceph branch',
)
parser.add_argument(
'-e', '--email',
help='When tests finish or time out, send an email here',
)
parser.add_argument(
'-N', '--num',
help='Number of times to run/queue the job',
type=int,
default=1,
)
parser.add_argument(
'-l', '--limit',
metavar='JOBS',
help='Queue at most this many jobs',
type=int,
)
parser.add_argument(
'--subset',
help=('Instead of scheduling the entire suite, break the '
'set of jobs into <outof> pieces (each of which will '
'contain each facet at least once) and schedule '
'piece <index>. Scheduling 0/<outof>, 1/<outof>, '
'2/<outof> ... <outof>-1/<outof> will schedule all '
'jobs in the suite (many more than once).')
)
parser.add_argument(
'-p', '--priority',
help='Job priority (lower is sooner)',
type=int,
default=1000,
)
parser.add_argument(
'--timeout',
help=('How long, in seconds, to wait for jobs to finish '
'before sending email. This does not kill jobs.'),
type=int,
default=43200,
)
parser.add_argument(
'--filter',
help=('Only run jobs whose description contains at least one '
'of the keywords in the comma separated keyword '
'string specified. ')
)
parser.add_argument(
'--filter-out',
help=('Do not run jobs whose description contains any of '
'the keywords in the comma separated keyword '
'string specified. ')
)

return parser.parse_args(argv)
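
# Usage sketch (not part of this commit): parse_args() can be driven
# directly, for instance from a unit test; the function name and the
# argument values below are illustrative only.
def _example_parse_args_usage():
    args = parse_args(['--key-name', 'myself',
                       '--key-filename', 'myself.pem',
                       '--suite', 'dummy',
                       '--simultaneous-jobs', '4'])
    assert args.suite == 'dummy'
    assert args.simultaneous_jobs == 4   # converted to int by argparse
    assert args.ceph == 'master'         # default ceph branch
    assert args.priority == 1000         # default job priority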
