transparent OpenStack provisioning for teuthology-suite
The teuthology-openstack command is a wrapper around teuthology-suite
that transparently creates the teuthology cluster using OpenStack
virtual machines.

For machines with machine_type == openstack in paddles, when locking a
machine, an instance is created in the matching OpenStack cluster with:

   openstack server create redhat

The instance is then renamed redhat042010 if it is assigned the IP
x.x.42.10/16, and locked in paddles, which has been prepared with one
slot for each available IP in the range.
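
The renaming convention can be sketched as follows (a hypothetical
helper, not code from this commit; the actual implementation may
differ): the last two octets of the assigned IP are zero-padded to
three digits and appended to the base name.

```python
def openstack_instance_name(base, ip):
    """Sketch of the naming scheme: 'redhat' + 'x.x.42.10' -> 'redhat042010'.

    Hypothetical helper illustrating the convention described above.
    """
    octets = ip.split('.')
    # Zero-pad each of the last two octets to three digits.
    return base + ''.join('%03d' % int(octet) for octet in octets[-2:])
```

For example, openstack_instance_name('redhat', '167.114.42.10')
yields 'redhat042010'.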

An OpenStack cluster is defined in the .teuthology.yaml file as follows:

openstack:
  user-data: teuthology/openstack/openstack-{os_type}-{os_version}-user-data.txt
  nameserver: 167.114.252.136
  machine:
    disk: 10 # GB
    ram: 7000 # MB
    cpus: 1
  volumes:
    count: 0
    size: 1 # GB
  flavor-select-regexp: ^vps-ssd
  subnet: 167.114.224.0/19
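
A sketch of how the flavor-select-regexp and machine hints above could
drive flavor selection (the helper name and the shape of the flavor
dicts are assumptions for illustration, not the commit's actual code):

```python
import re

def select_flavor(flavors, config):
    """Pick the smallest flavor matching flavor-select-regexp and the hints.

    flavors: list of dicts with 'name', 'ram' (MB), 'disk' (GB), 'vcpus'.
    config: the 'openstack' section of .teuthology.yaml shown above.
    """
    pattern = re.compile(config['flavor-select-regexp'])
    hint = config['machine']
    # Keep only flavors whose name matches and that satisfy the hints.
    candidates = [f for f in flavors
                  if pattern.search(f['name'])
                  and f['ram'] >= hint['ram']
                  and f['disk'] >= hint['disk']
                  and f['vcpus'] >= hint['cpus']]
    if not candidates:
        raise ValueError('no flavor matches the resource hints')
    # Prefer the smallest flavor that is still big enough.
    return min(candidates, key=lambda f: (f['ram'], f['disk'], f['vcpus']))
```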

When the machine is unlocked, it is destroyed with

   openstack server delete redhat042010

The python-openstackclient command-line client is used instead of the
corresponding Python API because it is well maintained and documented.
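
Wrapping that CLI from Python can be sketched as below (illustrative
helper names under the assumption that commands are shelled out and
their JSON output parsed; the commit's actual wrapper may differ):

```python
import json
import subprocess

def openstack_cmd(*args):
    """Build an openstack CLI invocation that emits JSON output."""
    return ['openstack'] + list(args) + ['-f', 'json']

def openstack(*args):
    """Run an openstack subcommand and return its parsed JSON output."""
    # e.g. openstack('server', 'show', 'redhat042010')
    return json.loads(subprocess.check_output(openstack_cmd(*args)))
```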

Integration tests require an OpenStack tenant.

Fixes: #6502 (http://tracker.ceph.com/issues/6502)

Signed-off-by: Loic Dachary <loic@dachary.org>
ldachary committed Sep 2, 2015
1 parent 4ec4652 commit 26e140e
Showing 35 changed files with 2,355 additions and 13 deletions.
122 changes: 122 additions & 0 deletions README.rst
@@ -320,6 +320,128 @@ specified in ``$HOME/.teuthology.yaml``::

test_path: <directory>

OpenStack backend
=================

The ``teuthology-openstack`` command is a wrapper around
``teuthology-suite`` that transparently creates the teuthology cluster
using OpenStack virtual machines.

Prerequisites
-------------

You need an OpenStack tenant with access to the nova and cinder APIs
(for instance http://entercloudsuite.com/). If the cinder API is not
available (for instance https://www.ovh.com/fr/cloud/), some jobs
won't run because they expect volumes attached to each instance.

Setup OpenStack at Enter Cloud Suite
------------------------------------

* create an account and `log in to the dashboard <https://dashboard.entercloudsuite.com/>`_
* `create an Ubuntu 14.04 instance
<https://dashboard.entercloudsuite.com/console/index#/launch-instance>`_
with 1GB RAM and a public IP and destroy it immediately afterwards.
* get $HOME/openrc.sh from `the horizon dashboard <https://horizon.entercloudsuite.com/project/access_and_security/?tab=access_security_tabs__api_access_tab>`_

The creation/destruction of an instance via the dashboard is the
shortest path to create the network, subnet and router that would
otherwise need to be created via the neutron API.

Setup OpenStack at OVH
----------------------

OVH is cheaper than EnterCloudSuite but does not provide volumes (as
of August 2015) and is therefore unsuitable for teuthology tests that
require disks attached to the instance. Each instance has a public IP
by default.

* `create an account <https://www.ovh.com/fr/support/new_nic.xml>`_
* get $HOME/openrc.sh from `the horizon dashboard <https://horizon.cloud.ovh.net/project/access_and_security/?tab=access_security_tabs__api_access_tab>`_

Setup
-----

* Get and configure teuthology::

$ git clone -b wip-6502-openstack-v3 http://github.com/dachary/teuthology
$ cd teuthology ; ./bootstrap install
$ source virtualenv/bin/activate

Get OpenStack credentials and test them
---------------------------------------

* follow the `OpenStack API Quick Start <http://docs.openstack.org/api/quick-start/content/index.html#cli-intro>`_
* source $HOME/openrc.sh
* verify the OpenStack client works::

$ nova list
+----+------------+--------+------------+-------------+-------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+----+------------+--------+------------+-------------+-------------------------+
+----+------------+--------+------------+-------------+-------------------------+
* upload your ssh public key with::

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub myself
+-------------+-------------------------------------------------+
| Field | Value |
+-------------+-------------------------------------------------+
| fingerprint | e0:a3:ab:5f:01:54:5c:1d:19:40:d9:62:b4:b3:a1:0b |
| name | myself |
| user_id | 5cf9fa21b2e9406b9c4108c42aec6262 |
+-------------+-------------------------------------------------+

Usage
-----

* Run the dummy suite as a test (``myself`` is a keypair created as
explained in the previous section)::

$ teuthology-openstack --key-name myself --suite dummy
Job scheduled with name ubuntu-2015-07-24_09:03:29-dummy-master---basic-openstack and ID 1
2015-07-24 09:03:30,520.520 INFO:teuthology.suite:ceph sha1: dedda6245ce8db8828fdf2d1a2bfe6163f1216a1
2015-07-24 09:03:31,620.620 INFO:teuthology.suite:ceph version: v9.0.2-829.gdedda62
2015-07-24 09:03:31,620.620 INFO:teuthology.suite:teuthology branch: master
2015-07-24 09:03:32,196.196 INFO:teuthology.suite:ceph-qa-suite branch: master
2015-07-24 09:03:32,197.197 INFO:teuthology.repo_utils:Fetching from upstream into /home/ubuntu/src/ceph-qa-suite_master
2015-07-24 09:03:33,096.096 INFO:teuthology.repo_utils:Resetting repo at /home/ubuntu/src/ceph-qa-suite_master to branch master
2015-07-24 09:03:33,157.157 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy generated 1 jobs (not yet filtered)
2015-07-24 09:03:33,158.158 INFO:teuthology.suite:Scheduling dummy/{all/nop.yaml}
2015-07-24 09:03:34,045.045 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy scheduled 1 jobs.
2015-07-24 09:03:34,046.046 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy -- 0 jobs were filtered out.

2015-07-24 11:03:34,104.104 INFO:teuthology.openstack:
web interface: http://167.114.242.13:8081/
ssh access : ssh ubuntu@167.114.242.13 # logs in /usr/share/nginx/html

* Visit the web interface (the URL is displayed at the end of the
teuthology-openstack output) to monitor the progress of the suite.

* The virtual machine running the suite will persist for forensic
analysis purposes. To destroy it run::

$ teuthology-openstack --key-name myself --teardown

Running the OpenStack backend integration tests
-----------------------------------------------

The easiest way to run the integration tests is to first run a dummy suite::

$ teuthology-openstack --key-name myself --suite dummy

This will create a virtual machine suitable for running the
integration tests. Once logged in to the virtual machine::

$ pkill -f teuthology-worker
$ cd teuthology ; pip install "tox>=1.9"
$ tox -v -e openstack-integration
integration/openstack-integration.py::TestSuite::test_suite_noop PASSED
...
========= 9 passed in 2545.51 seconds ========
$ tox -v -e openstack
integration/test_openstack.py::TestTeuthologyOpenStack::test_create PASSED
...
========= 1 passed in 204.35 seconds =========

VIRTUAL MACHINE SUPPORT
=======================
2 changes: 1 addition & 1 deletion bootstrap
@@ -27,7 +27,7 @@ Linux)
# C) Adding "Precise" conditionals somewhere, eg. conditionalizing
# this bootstrap script to only use the python-libvirt package on
# Ubuntu Precise.
for package in python-dev libssl-dev python-pip python-virtualenv libevent-dev python-libvirt libmysqlclient-dev libffi-dev; do
for package in python-dev libssl-dev python-pip python-virtualenv libevent-dev python-libvirt libmysqlclient-dev libffi-dev libyaml-dev libpython-dev ; do
if [ "$(dpkg --status -- $package|sed -n 's/^Status: //p')" != "install ok installed" ]; then
# add a space after old values
missing="${missing:+$missing }$package"
151 changes: 151 additions & 0 deletions scripts/openstack.py
@@ -0,0 +1,151 @@
import argparse
import sys

import teuthology.openstack


def main(argv=sys.argv[1:]):
teuthology.openstack.main(parse_args(argv), argv)


def parse_args(argv):
parser = argparse.ArgumentParser(
formatter_class=argparse.RawDescriptionHelpFormatter,
description="""
Run a suite of ceph integration tests. A suite is a directory containing
facets. A facet is a directory containing config snippets. Running a suite
means running teuthology for every configuration combination generated by
taking one config snippet from each facet. Any config files passed on the
command line will be used for every combination, and will override anything in
the suite. By specifying a subdirectory in the suite argument, it is possible
to limit the run to a specific facet. For instance -s upgrade/dumpling-x only
runs the dumpling-x facet of the upgrade suite.
Display the http and ssh access to follow the progress of the suite
and analyze results.
firefox http://183.84.234.3:8081/
ssh -i teuthology-admin.pem ubuntu@183.84.234.3
""")
parser.add_argument(
'-v', '--verbose',
action='store_true', default=None,
help='be more verbose',
)
parser.add_argument(
'--name',
help='OpenStack primary instance name',
default='teuthology',
)
parser.add_argument(
'--key-name',
help='OpenStack keypair name',
required=True,
)
parser.add_argument(
'--key-filename',
help='path to the ssh private key',
)
parser.add_argument(
'--simultaneous-jobs',
help='maximum number of jobs running in parallel',
type=int,
default=2,
)
parser.add_argument(
'--teardown',
action='store_true', default=None,
help='destroy the cluster, if it exists',
)
# copy/pasted from scripts/suite.py
parser.add_argument(
'config_yaml',
nargs='*',
help='Optional extra job yaml to include',
)
parser.add_argument(
'--dry-run',
action='store_true', default=None,
help='Do a dry run; do not schedule anything',
)
parser.add_argument(
'-s', '--suite',
help='The suite to schedule',
)
parser.add_argument(
'-c', '--ceph',
help='The ceph branch to run against',
default='master',
)
parser.add_argument(
'-k', '--kernel',
help=('The kernel branch to run against; if not '
'supplied, the installed kernel is unchanged'),
)
parser.add_argument(
'-f', '--flavor',
help=("The kernel flavor to run against: ('basic',"
"'gcov', 'notcmalloc')"),
default='basic',
)
parser.add_argument(
'-d', '--distro',
help='Distribution to run against',
)
parser.add_argument(
'--suite-branch',
help='Use this suite branch instead of the ceph branch',
)
parser.add_argument(
'-e', '--email',
help='When tests finish or time out, send an email here',
)
parser.add_argument(
'-N', '--num',
help='Number of times to run/queue the job',
type=int,
default=1,
)
parser.add_argument(
'-l', '--limit',
metavar='JOBS',
help='Queue at most this many jobs',
type=int,
)
parser.add_argument(
'--subset',
help=('Instead of scheduling the entire suite, break the '
'set of jobs into <outof> pieces (each of which will '
'contain each facet at least once) and schedule '
'piece <index>. Scheduling 0/<outof>, 1/<outof>, '
'2/<outof> ... <outof>-1/<outof> will schedule all '
'jobs in the suite (many more than once).')
)
parser.add_argument(
'-p', '--priority',
help='Job priority (lower is sooner)',
type=int,
default=1000,
)
parser.add_argument(
'--timeout',
help=('How long, in seconds, to wait for jobs to finish '
'before sending email. This does not kill jobs.'),
type=int,
default=43200,
)
parser.add_argument(
'--filter',
help=('Only run jobs whose description contains at least one '
'of the keywords in the comma separated keyword '
'string specified. ')
)
parser.add_argument(
'--filter-out',
help=('Do not run jobs whose description contains any of '
'the keywords in the comma separated keyword '
'string specified. ')
)

return parser.parse_args(argv)
4 changes: 2 additions & 2 deletions scripts/suite.py
@@ -74,10 +74,10 @@
--timeout <timeout> How long, in seconds, to wait for jobs to finish
before sending email. This does not kill jobs.
[default: {default_results_timeout}]
--filter KEYWORDS Only run jobs whose name contains at least one
--filter KEYWORDS Only run jobs whose description contains at least one
of the keywords in the comma separated keyword
string specified.
--filter-out KEYWORDS Do not run jobs whose name contains any of
--filter-out KEYWORDS Do not run jobs whose description contains any of
the keywords in the comma separated keyword
string specified.
""".format(default_machine_type=config.default_machine_type,
4 changes: 3 additions & 1 deletion setup.py
@@ -40,7 +40,7 @@
'boto >= 2.0b4',
'bunch >= 1.0.0',
'configobj',
'six',
'six >= 1.9', # python-openstackclient won't work properly with less
'httplib2',
'paramiko < 1.8',
'pexpect',
@@ -55,6 +55,7 @@
'pyopenssl>=0.13',
'ndg-httpsclient',
'pyasn1',
'python-openstackclient',
],


@@ -64,6 +65,7 @@
entry_points={
'console_scripts': [
'teuthology = scripts.run:main',
'teuthology-openstack = scripts.openstack:main',
'teuthology-nuke = scripts.nuke:main',
'teuthology-suite = scripts.suite:main',
'teuthology-ls = scripts.ls:main',
23 changes: 22 additions & 1 deletion teuthology/lock.py
@@ -369,6 +369,22 @@ def main(ctx):
return ret


def lock_many_openstack(ctx, num, machine_type, user=None, description=None,
arch=None):
os_type = provision.get_distro(ctx)
os_version = provision.get_distro_version(ctx)
if hasattr(ctx, 'config'):
resources_hint = ctx.config.get('openstack')
else:
resources_hint = None
machines = provision.ProvisionOpenStack().create(
num, os_type, os_version, arch, resources_hint)
result = {}
for machine in machines:
lock_one(machine, user, description)
result[machine] = None # we do not collect ssh host keys yet
return result

def lock_many(ctx, num, machine_type, user=None, description=None,
os_type=None, os_version=None, arch=None):
if user is None:
@@ -385,6 +401,11 @@ def lock_many(ctx, num, machine_type, user=None, description=None,
machine_types_list = misc.get_multi_machine_types(machine_type)
if machine_types_list == ['vps']:
machine_types = machine_types_list
elif machine_types_list == ['openstack']:
return lock_many_openstack(ctx, num, machine_type,
user=user,
description=description,
arch=arch)
elif 'vps' in machine_types_list:
machine_types_non_vps = list(machine_types_list)
machine_types_non_vps.remove('vps')
@@ -488,7 +509,7 @@ def unlock_many(names, user):
def unlock_one(ctx, name, user, description=None):
name = misc.canonicalize_hostname(name, user=None)
if not provision.destroy_if_vm(ctx, name, user, description):
log.error('downburst destroy failed for %s', name)
log.error('destroy failed for %s', name)
request = dict(name=name, locked=False, locked_by=user,
description=description)
uri = os.path.join(config.lock_server, 'nodes', name, 'lock', '')
9 changes: 7 additions & 2 deletions teuthology/nuke.py
@@ -14,6 +14,7 @@
from .lock import list_locks
from .lock import unlock_one
from .lock import find_stale_locks
from .lockstatus import get_status
from .misc import config_file
from .misc import merge_configs
from .misc import get_testdir
@@ -488,8 +489,12 @@ def nuke_helper(ctx, should_unlock):
(target,) = ctx.config['targets'].keys()
host = target.split('@')[-1]
shortname = host.split('.')[0]
if should_unlock and 'vpm' in shortname:
return
if should_unlock:
if 'vpm' in shortname:
return
status_info = get_status(host)
if status_info['is_vm'] and status_info['machine_type'] == 'openstack':
return
log.debug('shortname: %s' % shortname)
log.debug('{ctx}'.format(ctx=ctx))
if (not ctx.noipmi and 'ipmi_user' in ctx.teuthology_config and
