diff --git a/README.rst b/README.rst index fd0d963903..be041a4b45 100644 --- a/README.rst +++ b/README.rst @@ -320,6 +320,144 @@ specified in ``$HOME/.teuthology.yaml``:: test_path: +OpenStack backend +================= + +The ``teuthology-openstack`` command is a wrapper around +``teuthology-suite`` that transparently creates the teuthology cluster +using OpenStack virtual machines. + +Prerequisites +------------- + +An OpenStack tenant with access to the nova and cinder API (for +instance http://entercloudsuite.com/). If the cinder API is not +available (for instance https://www.ovh.com/fr/cloud/), some jobs +won't run because they expect volumes attached to each instance. + +Setup OpenStack at Enter Cloud Suite +------------------------------------ + +* create an account and `login the dashboard `_ +* `create an Ubuntu 14.04 instance + `_ + with 1GB RAM and a public IP and destroy it immediately afterwards. +* get $HOME/openrc.sh from `the horizon dashboard `_ + +The creation/destruction of an instance via the dashboard is the +shortest path to create the network, subnet and router that would +otherwise need to be created via the neutron API. + +Setup OpenStack at OVH +---------------------- + +It is cheaper than EnterCloudSuite but does not provide volumes (as +of August 2015) and is therefore unfit to run teuthology tests that +require disks attached to the instance. Each instance has a public IP +by default. + +* `create an account `_ +* get $HOME/openrc.sh from `the horizon dashboard `_ + +Setup +----- + +* Get and configure teuthology:: + + $ git clone -b wip-6502-openstack-v3 http://github.com/dachary/teuthology + $ cd teuthology ; ./bootstrap install + $ source virtualenv/bin/activate + +Get OpenStack credentials and test it +------------------------------------- + +* follow the `OpenStack API Quick Start `_ +* source $HOME/openrc.sh +* verify the OpenStack client works:: + + $ nova list + +----+------------+--------+------------+-------------+-------------------------+ + | ID | Name | Status | Task State | Power State | Networks | + +----+------------+--------+------------+-------------+-------------------------+ + +----+------------+--------+------------+-------------+-------------------------+ +* create a passwordless ssh public key with:: + + $ openstack keypair create myself > myself.pem + +-------------+-------------------------------------------------+ + | Field | Value | + +-------------+-------------------------------------------------+ + | fingerprint | e0:a3:ab:5f:01:54:5c:1d:19:40:d9:62:b4:b3:a1:0b | + | name | myself | + | user_id | 5cf9fa21b2e9406b9c4108c42aec6262 | + +-------------+-------------------------------------------------+ + $ chmod 600 myself.pem + +Usage +----- + +* Create a passwordless ssh public key:: + + $ openstack keypair create myself > myself.pem + $ chmod 600 myself.pem + +* Run the dummy suite (it does nothing useful but shows all works as + expected):: + + $ teuthology-openstack --key-filename myself.pem --key-name myself --suite dummy + Job scheduled with name ubuntu-2015-07-24_09:03:29-dummy-master---basic-openstack and ID 1 + 2015-07-24 09:03:30,520.520 INFO:teuthology.suite:ceph sha1: dedda6245ce8db8828fdf2d1a2bfe6163f1216a1 + 2015-07-24 09:03:31,620.620 INFO:teuthology.suite:ceph version: v9.0.2-829.gdedda62 + 2015-07-24 09:03:31,620.620 INFO:teuthology.suite:teuthology branch: master + 2015-07-24 09:03:32,196.196 INFO:teuthology.suite:ceph-qa-suite branch: master + 2015-07-24 09:03:32,197.197 INFO:teuthology.repo_utils:Fetching from 
upstream into /home/ubuntu/src/ceph-qa-suite_master + 2015-07-24 09:03:33,096.096 INFO:teuthology.repo_utils:Resetting repo at /home/ubuntu/src/ceph-qa-suite_master to branch master + 2015-07-24 09:03:33,157.157 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy generated 1 jobs (not yet filtered) + 2015-07-24 09:03:33,158.158 INFO:teuthology.suite:Scheduling dummy/{all/nop.yaml} + 2015-07-24 09:03:34,045.045 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy scheduled 1 jobs. + 2015-07-24 09:03:34,046.046 INFO:teuthology.suite:Suite dummy in /home/ubuntu/src/ceph-qa-suite_master/suites/dummy -- 0 jobs were filtered out. + + 2015-07-24 11:03:34,104.104 INFO:teuthology.openstack: + web interface: http://167.114.242.13:8081/ + ssh access : ssh ubuntu@167.114.242.13 # logs in /usr/share/nginx/html + +* Visit the web interface (the URL is displayed at the end of the + teuthology-openstack output) to monitor the progress of the suite. + +* The virtual machine running the suite will persist for forensic + analysis purposes. To destroy it run:: + + $ teuthology-openstack --key-filename myself.pem --key-name myself --teardown + +* The test results can be uploaded to a publicly accessible location + with the ``--upload`` flag:: + + $ teuthology-openstack --key-filename myself.pem --key-name myself \ + --suite dummy --upload + + +Running the OpenStack backend integration tests +----------------------------------------------- + +The easiest way to run the integration tests is to first run a dummy suite:: + + $ teuthology-openstack --key-name myself --suite dummy + ... + ssh access : ssh ubuntu@167.114.242.13 + +This will create a virtual machine suitable for the integration +test. Login wih the ssh access displayed at the end of the +``teuthology-openstack`` command and run the following:: + + $ pkill -f teuthology-worker + $ cd teuthology ; pip install "tox>=1.9" + $ tox -v -e openstack-integration + integration/openstack-integration.py::TestSuite::test_suite_noop PASSED + ... + ========= 9 passed in 2545.51 seconds ======== + $ tox -v -e openstack + integration/test_openstack.py::TestTeuthologyOpenStack::test_create PASSED + ... + ========= 1 passed in 204.35 seconds ========= VIRTUAL MACHINE SUPPORT ======================= diff --git a/bootstrap b/bootstrap index 8aa9456334..b87e6f2f67 100755 --- a/bootstrap +++ b/bootstrap @@ -27,7 +27,7 @@ Linux) # C) Adding "Precise" conditionals somewhere, eg. conditionalizing # this bootstrap script to only use the python-libvirt package on # Ubuntu Precise. - for package in python-dev libssl-dev python-pip python-virtualenv libevent-dev python-libvirt libmysqlclient-dev libffi-dev; do + for package in python-dev libssl-dev python-pip python-virtualenv libevent-dev python-libvirt libmysqlclient-dev libffi-dev libyaml-dev libpython-dev ; do if [ "$(dpkg --status -- $package|sed -n 's/^Status: //p')" != "install ok installed" ]; then # add a space after old values missing="${missing:+$missing }$package" diff --git a/docs/siteconfig.rst b/docs/siteconfig.rst index d0ab2643f0..6ae7f13c96 100644 --- a/docs/siteconfig.rst +++ b/docs/siteconfig.rst @@ -109,3 +109,75 @@ Here is a sample configuration with many of the options set and documented:: # armv7l # etc. 
baseurl_template: http://{host}/{proj}-{pkg_type}-{dist}-{arch}-{flavor}/{uri} + + # The OpenStack backend configuration, a dictionary interpreted as follows + # + openstack: + + # The teuthology-openstack command will clone teuthology with + # this command for the purpose of deploying teuthology from + # scratch and run workers listening on the openstack tube + # + clone: git clone -b wip-6502-openstack-v3 http://github.com/dachary/teuthology + + # The path to the user-data file used when creating a target. It can have + # the {os_type} and {os_version} placeholders which are replaced with + # the value of --os-type and --os-version. No instance of a given {os_type} + # and {os_version} combination can be created unless such a file exists. + # + user-data: teuthology/openstack/openstack-{os_type}-{os_version}-user-data.txt + + # The IP address of the instance running the teuthology cluster. It will + # be used to build user facing URLs and should usually be the floating IP + # associated with the instance running the pulpito server. + # + ip: 8.4.8.4 + + # OpenStack has predefined machine sizes (called flavors). + # For a given job requiring N machines, the following example selects + # the smallest flavor that satisfies these requirements. For instance, + # if there are three flavors + # + # F1 (10GB disk, 2000MB RAM, 1CPU) + # F2 (100GB disk, 7000MB RAM, 1CPU) + # F3 (50GB disk, 7000MB RAM, 1CPU) + # + # and machine: { disk: 40, ram: 7000, cpus: 1 }, F3 will be chosen. + # F1 does not have enough RAM (2000 instead of the 7000 minimum) and + # although F2 satisfies all the requirements, it is larger than F3 + # (100GB instead of 50GB) and presumably more expensive. + # + # This configuration applies to all instances created for teuthology jobs + # that do not redefine these values. + # + machine: + + # The minimum root disk size of the flavor, in GB + # + disk: 20 # GB + + # The minimum RAM size of the flavor, in MB + # + ram: 8000 # MB + + # The minimum number of vCPUs of the flavor + # + cpus: 1 + + # The volumes attached to each instance. In the following example, + # three volumes of 10 GB will be created for each instance and + # will show as /dev/vdb, /dev/vdc and /dev/vdd + # + # + # This configuration applies to all instances created for teuthology jobs + # that do not redefine these values. + # + volumes: + + # The number of volumes + # + count: 3 + + # The size of each volume, in GB + # + size: 10 # GB diff --git a/scripts/openstack.py b/scripts/openstack.py new file mode 100644 index 0000000000..4038efd2d9 --- /dev/null +++ b/scripts/openstack.py @@ -0,0 +1,161 @@ +import argparse +import sys + +import teuthology.openstack + + +def main(argv=sys.argv[1:]): + teuthology.openstack.main(parse_args(argv), argv) + + +def parse_args(argv): + parser = argparse.ArgumentParser( + formatter_class=argparse.RawDescriptionHelpFormatter, + description=""" +Run a suite of ceph integration tests. A suite is a directory containing +facets. A facet is a directory containing config snippets. Running a suite +means running teuthology for every configuration combination generated by +taking one config snippet from each facet. Any config files passed on the +command line will be used for every combination, and will override anything in +the suite. By specifying a subdirectory in the suite argument, it is possible +to limit the run to a specific facet. For instance -s upgrade/dumpling-x only +runs the dumpling-x facet of the upgrade suite.
+ +Display the http and ssh access to follow the progress of the suite +and analyze results. + + firefox http://183.84.234.3:8081/ + ssh -i teuthology-admin.pem ubuntu@183.84.234.3 + +""") + parser.add_argument( + '-v', '--verbose', + action='store_true', default=None, + help='be more verbose', + ) + parser.add_argument( + '--name', + help='OpenStack primary instance name', + default='teuthology', + ) + parser.add_argument( + '--key-name', + help='OpenStack keypair name', + required=True, + ) + parser.add_argument( + '--key-filename', + help='path to the ssh private key', + ) + parser.add_argument( + '--simultaneous-jobs', + help='maximum number of jobs running in parallel', + type=int, + default=2, + ) + parser.add_argument( + '--teardown', + action='store_true', default=None, + help='destroy the cluster, if it exists', + ) + parser.add_argument( + '--upload', + action='store_true', default=False, + help='upload archives to an rsync server', + ) + parser.add_argument( + '--archive-upload', + help='rsync destination to upload archives', + default='ubuntu@teuthology-logs.public.ceph.com:./', + ) + # copy/pasted from scripts/suite.py + parser.add_argument( + 'config_yaml', + nargs='*', + help='Optional extra job yaml to include', + ) + parser.add_argument( + '--dry-run', + action='store_true', default=None, + help='Do a dry run; do not schedule anything', + ) + parser.add_argument( + '-s', '--suite', + help='The suite to schedule', + ) + parser.add_argument( + '-c', '--ceph', + help='The ceph branch to run against', + default='master', + ) + parser.add_argument( + '-k', '--kernel', + help=('The kernel branch to run against; if not ' + 'supplied, the installed kernel is unchanged'), + ) + parser.add_argument( + '-f', '--flavor', + help=("The kernel flavor to run against: ('basic'," + "'gcov', 'notcmalloc')"), + default='basic', + ) + parser.add_argument( + '-d', '--distro', + help='Distribution to run against', + ) + parser.add_argument( + '--suite-branch', + help='Use this suite branch instead of the ceph branch', + ) + parser.add_argument( + '-e', '--email', + help='When tests finish or time out, send an email here', + ) + parser.add_argument( + '-N', '--num', + help='Number of times to run/queue the job', + type=int, + default=1, + ) + parser.add_argument( + '-l', '--limit', + metavar='JOBS', + help='Queue at most this many jobs', + type=int, + ) + parser.add_argument( + '--subset', + help=('Instead of scheduling the entire suite, break the ' + 'set of jobs into pieces (each of which will ' + 'contain each facet at least once) and schedule ' + 'piece . Scheduling 0/, 1/, ' + '2/ ... -1/ will schedule all ' + 'jobs in the suite (many more than once).') + ) + parser.add_argument( + '-p', '--priority', + help='Job priority (lower is sooner)', + type=int, + default=1000, + ) + parser.add_argument( + '--timeout', + help=('How long, in seconds, to wait for jobs to finish ' + 'before sending email. This does not kill jobs.'), + type=int, + default=43200, + ) + parser.add_argument( + '--filter', + help=('Only run jobs whose description contains at least one ' + 'of the keywords in the comma separated keyword ' + 'string specified. ') + ) + parser.add_argument( + '--filter-out', + help=('Do not run jobs whose description contains any of ' + 'the keywords in the comma separated keyword ' + 'string specified. 
') + ) + + return parser.parse_args(argv) diff --git a/scripts/suite.py b/scripts/suite.py index 01b12e5e0e..5bcf9fdf24 100644 --- a/scripts/suite.py +++ b/scripts/suite.py @@ -74,12 +74,13 @@ --timeout How long, in seconds, to wait for jobs to finish before sending email. This does not kill jobs. [default: {default_results_timeout}] - --filter KEYWORDS Only run jobs whose name contains at least one + --filter KEYWORDS Only run jobs whose description contains at least one of the keywords in the comma separated keyword string specified. - --filter-out KEYWORDS Do not run jobs whose name contains any of + --filter-out KEYWORDS Do not run jobs whose description contains any of the keywords in the comma separated keyword string specified. + --archive-upload RSYNC_DEST Rsync destination to upload archives. """.format(default_machine_type=config.default_machine_type, default_results_timeout=config.results_timeout) diff --git a/setup.py b/setup.py index 4c256a8a56..646ad2d322 100644 --- a/setup.py +++ b/setup.py @@ -40,7 +40,7 @@ 'boto >= 2.0b4', 'bunch >= 1.0.0', 'configobj', - 'six', + 'six >= 1.9', # python-openstackclient won't work properly with less 'httplib2', 'paramiko < 1.8', 'pexpect', @@ -55,6 +55,7 @@ 'pyopenssl>=0.13', 'ndg-httpsclient', 'pyasn1', + 'python-openstackclient', ], @@ -64,6 +65,7 @@ entry_points={ 'console_scripts': [ 'teuthology = scripts.run:main', + 'teuthology-openstack = scripts.openstack:main', 'teuthology-nuke = scripts.nuke:main', 'teuthology-suite = scripts.suite:main', 'teuthology-ls = scripts.ls:main', diff --git a/teuthology/config.py b/teuthology/config.py index afb740b5c3..e456cc6c52 100644 --- a/teuthology/config.py +++ b/teuthology/config.py @@ -126,6 +126,8 @@ class TeuthologyConfig(YamlConfig): yaml_path = os.path.join(os.path.expanduser('~/.teuthology.yaml')) _defaults = { 'archive_base': '/var/lib/teuthworker/archive', + 'archive_upload': None, + 'archive_upload_key': None, 'automated_scheduling': False, 'reserve_machines': 5, 'ceph_git_base_url': 'https://github.com/ceph/', @@ -145,6 +147,20 @@ class TeuthologyConfig(YamlConfig): 'koji_task_url': 'https://kojipkgs.fedoraproject.org/work/', 'baseurl_template': 'http://{host}/{proj}-{pkg_type}-{dist}-{arch}-{flavor}/{uri}', 'teuthology_path': None, + 'openstack': { + 'clone': 'git clone http://github.com/ceph/teuthology', + 'user-data': 'teuthology/openstack/openstack-{os_type}-{os_version}-user-data.txt', + 'ip': '1.1.1.1', + 'machine': { + 'disk': 20, + 'ram': 8000, + 'cpus': 1, + }, + 'volumes': { + 'count': 3, + 'size': 10, + }, + }, } def __init__(self, yaml_path=None): diff --git a/teuthology/lock.py b/teuthology/lock.py index 312c955182..ae8cd3dbf3 100644 --- a/teuthology/lock.py +++ b/teuthology/lock.py @@ -369,6 +369,22 @@ def main(ctx): return ret +def lock_many_openstack(ctx, num, machine_type, user=None, description=None, + arch=None): + os_type = provision.get_distro(ctx) + os_version = provision.get_distro_version(ctx) + if hasattr(ctx, 'config'): + resources_hint = ctx.config.get('openstack') + else: + resources_hint = None + machines = provision.ProvisionOpenStack().create( + num, os_type, os_version, arch, resources_hint) + result = {} + for machine in machines: + lock_one(machine, user, description) + result[machine] = None # we do not collect ssh host keys yet + return result + def lock_many(ctx, num, machine_type, user=None, description=None, os_type=None, os_version=None, arch=None): if user is None: @@ -385,6 +401,11 @@ def lock_many(ctx, num, machine_type, user=None, 
description=None, machine_types_list = misc.get_multi_machine_types(machine_type) if machine_types_list == ['vps']: machine_types = machine_types_list + elif machine_types_list == ['openstack']: + return lock_many_openstack(ctx, num, machine_type, + user=user, + description=description, + arch=arch) elif 'vps' in machine_types_list: machine_types_non_vps = list(machine_types_list) machine_types_non_vps.remove('vps') @@ -488,7 +509,7 @@ def unlock_many(names, user): def unlock_one(ctx, name, user, description=None): name = misc.canonicalize_hostname(name, user=None) if not provision.destroy_if_vm(ctx, name, user, description): - log.error('downburst destroy failed for %s', name) + log.error('destroy failed for %s', name) request = dict(name=name, locked=False, locked_by=user, description=description) uri = os.path.join(config.lock_server, 'nodes', name, 'lock', '') diff --git a/teuthology/nuke.py b/teuthology/nuke.py index 3cdcb6dbbf..8991b1bf4b 100644 --- a/teuthology/nuke.py +++ b/teuthology/nuke.py @@ -14,6 +14,7 @@ from .lock import list_locks from .lock import unlock_one from .lock import find_stale_locks +from .lockstatus import get_status from .misc import config_file from .misc import merge_configs from .misc import get_testdir @@ -488,8 +489,12 @@ def nuke_helper(ctx, should_unlock): (target,) = ctx.config['targets'].keys() host = target.split('@')[-1] shortname = host.split('.')[0] - if should_unlock and 'vpm' in shortname: - return + if should_unlock: + if 'vpm' in shortname: + return + status_info = get_status(host) + if status_info['is_vm'] and status_info['machine_type'] == 'openstack': + return log.debug('shortname: %s' % shortname) log.debug('{ctx}'.format(ctx=ctx)) if (not ctx.noipmi and 'ipmi_user' in ctx.teuthology_config and diff --git a/teuthology/openstack/__init__.py b/teuthology/openstack/__init__.py new file mode 100644 index 0000000000..71439e357e --- /dev/null +++ b/teuthology/openstack/__init__.py @@ -0,0 +1,621 @@ +# +# Copyright (c) 2015 Red Hat, Inc. +# +# Author: Loic Dachary +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in +# all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +# THE SOFTWARE. 
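+#
+# This module drives an OpenStack tenant through the ``openstack`` command
+# line client: helpers shell out via misc.sh(... -f json ...) and parse the
+# JSON output. get_value() below expects ``openstack ... show -f json`` to
+# print a list of rows shaped like {'Field': 'id', 'Value': '...'}.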
+# +import json +import logging +import os +import paramiko +import re +import socket +import subprocess +import tempfile +import teuthology + +from teuthology.contextutil import safe_while +from teuthology.config import config as teuth_config +from teuthology.orchestra import connection +from teuthology import misc + +log = logging.getLogger(__name__) + +class OpenStack(object): + + # wget -O debian-8.0.qcow2 http://cdimage.debian.org/cdimage/openstack/current/debian-8.1.0-openstack-amd64.qcow2 + # wget -O ubuntu-12.04.qcow2 https://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-amd64-disk1.img + # wget -O ubuntu-12.04-i386.qcow2 https://cloud-images.ubuntu.com/precise/current/precise-server-cloudimg-i386-disk1.img + # wget -O ubuntu-14.04.qcow2 https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img + # wget -O ubuntu-14.04-i386.qcow2 https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-i386-disk1.img + # wget -O ubuntu-15.04.qcow2 https://cloud-images.ubuntu.com/vivid/current/vivid-server-cloudimg-arm64-disk1.img + # wget -O ubuntu-15.04-i386.qcow2 https://cloud-images.ubuntu.com/vivid/current/vivid-server-cloudimg-i386-disk1.img + # wget -O opensuse-13.2 http://download.opensuse.org/repositories/Cloud:/Images:/openSUSE_13.2/images/openSUSE-13.2-OpenStack-Guest.x86_64.qcow2 + # wget -O centos-7.0.qcow2 http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 + # wget -O centos-6.6.qcow2 http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud.qcow2 + # wget -O fedora-22.qcow2 https://download.fedoraproject.org/pub/fedora/linux/releases/22/Cloud/x86_64/Images/Fedora-Cloud-Base-22-20150521.x86_64.qcow2 + # wget -O fedora-21.qcow2 http://fedora.mirrors.ovh.net/linux/releases/21/Cloud/Images/x86_64/Fedora-Cloud-Base-20141203-21.x86_64.qcow2 + # wget -O fedora-20.qcow2 http://fedora.mirrors.ovh.net/linux/releases/20/Images/x86_64/Fedora-x86_64-20-20131211.1-sda.qcow2 + image2url = { + 'centos-6.5': 'http://cloud.centos.org/centos/6/images/CentOS-6-x86_64-GenericCloud.qcow2', + 'centos-7.0': 'http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-20150628_01.qcow2', + 'ubuntu-14.04': 'https://cloud-images.ubuntu.com/trusty/current/trusty-server-cloudimg-amd64-disk1.img', + } + + def __init__(self): + self.key_filename = None + self.username = 'ubuntu' + self.up_string = "UNKNOWN" + self.teuthology_suite = 'teuthology-suite' + + @staticmethod + def get_value(result, field): + """ + Get the value of a field from a result returned by the openstack command + in json format. + """ + return filter(lambda v: v['Field'] == field, result)[0]['Value'] + + def image_exists(self, image): + """ + Return true if the image exists in OpenStack. + """ + found = misc.sh("openstack image list -f json --property name='" + + self.image_name(image) + "'") + return len(json.loads(found)) > 0 + + def net_id(self, network): + """ + Return the uuid of the network in OpenStack. + """ + r = json.loads(misc.sh("openstack network show -f json " + + network)) + return self.get_value(r, 'id') + + def type_version(self, os_type, os_version): + """ + Return the string used to differentiate os_type and os_version in names. + """ + return os_type + '-' + os_version + + def image_name(self, name): + """ + Return the image name used by teuthology in OpenStack to avoid + conflicts with existing names. + """ + return "teuthology-" + name + + def image_create(self, name): + """ + Upload an image into OpenStack with glance. 
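+        The image is first fetched with wget -c from the URL registered in image2url.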
The image has to be qcow2. + """ + misc.sh("wget -c -O " + name + ".qcow2 " + self.image2url[name]) + misc.sh("glance image-create --property ownedby=teuthology " + + " --disk-format=qcow2 --container-format=bare " + + " --file " + name + ".qcow2 --name " + self.image_name(name)) + + def image(self, os_type, os_version): + """ + Return the image name for the given os_type and os_version. If the image + does not exist it will be created. + """ + name = self.type_version(os_type, os_version) + if not self.image_exists(name): + self.image_create(name) + return self.image_name(name) + + def flavor(self, hint, select): + """ + Return the smallest flavor that satisfies the desired size. + """ + flavors_string = misc.sh("openstack flavor list -f json") + flavors = json.loads(flavors_string) + found = [] + for flavor in flavors: + if select and not re.match(select, flavor['Name']): + continue + if (flavor['RAM'] >= hint['ram'] and + flavor['VCPUs'] >= hint['cpus'] and + flavor['Disk'] >= hint['disk']): + found.append(flavor) + if not found: + raise Exception("openstack flavor list: " + flavors_string + + " does not contain a flavor in which" + + " the desired " + str(hint) + " can fit") + + def sort_flavor(a, b): + return (a['VCPUs'] - b['VCPUs'] or + a['RAM'] - b['RAM'] or + a['Disk'] - b['Disk']) + sorted_flavor = sorted(found, cmp=sort_flavor) + log.debug("sorted flavor = " + str(sorted_flavor)) + return sorted_flavor[0]['Name'] + + def cloud_init_wait(self, name_or_ip): + """ + Wait for cloud-init to complete on the name_or_ip OpenStack instance. + """ + log.debug('cloud_init_wait ' + name_or_ip) + client_args = { + 'user_at_host': '@'.join((self.username, name_or_ip)), + 'timeout': 10, + 'retry': False, + } + if self.key_filename: + log.debug("using key " + self.key_filename) + client_args['key_filename'] = self.key_filename + with safe_while(sleep=2, tries=600, + action="cloud_init_wait " + name_or_ip) as proceed: + success = False + # CentOS 6.6 logs in /var/log/clout-init-output.log + # CentOS 7.0 logs in /var/log/clout-init.log + all_done = ("tail /var/log/cloud-init*.log ; " + + " test -f /tmp/init.out && tail /tmp/init.out ; " + + " grep '" + self.up_string + "' " + + "/var/log/cloud-init*.log") + while proceed(): + try: + client = connection.connect(**client_args) + except paramiko.PasswordRequiredException as e: + raise Exception( + "The private key requires a passphrase.\n" + "Create a new key with:" + " openstack keypair create myself > myself.pem\n" + " chmod 600 myself.pem\n" + "and call teuthology-openstack with the options\n" + " --key-name myself --key-filename myself.pem\n") + except paramiko.AuthenticationException as e: + log.debug('cloud_init_wait AuthenticationException ' + str(e)) + continue + except socket.timeout as e: + log.debug('cloud_init_wait connect socket.timeout ' + str(e)) + continue + except socket.error as e: + log.debug('cloud_init_wait connect socket.error ' + str(e)) + continue + except Exception as e: + if 'Unknown server' not in str(e): + log.exception('cloud_init_wait ' + name_or_ip) + if 'Unknown server' in str(e): + continue + else: + raise e + log.debug('cloud_init_wait ' + all_done) + try: + stdin, stdout, stderr = client.exec_command(all_done) + stdout.channel.settimeout(5) + out = stdout.read() + log.debug('cloud_init_wait stdout ' + all_done + ' ' + out) + except socket.timeout as e: + client.close() + log.debug('cloud_init_wait socket.timeout ' + all_done) + continue + except socket.error as e: + client.close() + log.debug('cloud_init_wait 
socket.error ' + str(e) + ' ' + all_done) + continue + log.debug('cloud_init_wait stderr ' + all_done + + ' ' + stderr.read()) + if stdout.channel.recv_exit_status() == 0: + success = True + client.close() + if success: + break + return success + + def exists(self, name_or_id): + """ + Return true if the OpenStack name_or_id instance exists, + false otherwise. + """ + servers = json.loads(misc.sh("openstack server list -f json")) + for server in servers: + if (server['ID'] == name_or_id or server['Name'] == name_or_id): + return True + return False + + @staticmethod + def get_addresses(instance_id): + """ + Return the list of IPs associated with instance_id in OpenStack. + """ + with safe_while(sleep=2, tries=30, + action="get ip " + instance_id) as proceed: + while proceed(): + instance = misc.sh("openstack server show -f json " + + instance_id) + addresses = OpenStack.get_value(json.loads(instance), + 'addresses') + found = re.match('.*\d+', addresses) + if found: + return addresses + + def get_ip(self, instance_id, network): + """ + Return the private IP of the OpenStack instance_id. The network, + if not the empty string, disambiguate multiple networks attached + to the instance. + """ + return re.findall(network + '=([\d.]+)', + self.get_addresses(instance_id))[0] + +class TeuthologyOpenStack(OpenStack): + + def __init__(self, args, config, argv): + """ + args is of type argparse.Namespace as returned + when parsing argv and config is the job + configuration. The argv argument can be re-used + to build the arguments list of teuthology-suite. + """ + super(TeuthologyOpenStack, self).__init__() + self.argv = argv + self.args = args + self.config = config + self.up_string = 'teuthology is up and running' + self.user_data = 'teuthology/openstack/openstack-user-data.txt' + + def main(self): + """ + Entry point implementing the teuthology-openstack command. + """ + self.setup_logs() + misc.read_config(self.args) + self.key_filename = self.args.key_filename + self.verify_openstack() + ip = self.setup() + if self.args.suite: + self.run_suite() + if self.args.key_filename: + identity = '-i ' + self.args.key_filename + ' ' + else: + identity = '' + if self.args.upload: + upload = 'upload to : ' + self.args.archive_upload + else: + upload = '' + log.info(""" +web interface: http://{ip}:8081/ +ssh access : ssh {identity}{username}@{ip} # logs in /usr/share/nginx/html +{upload}""".format(ip=ip, + username=self.username, + identity=identity, + upload=upload)) + if self.args.teardown: + self.teardown() + + def run_suite(self): + """ + Delegate running teuthology-suite to the OpenStack instance + running the teuthology cluster. + """ + original_argv = self.argv[:] + argv = [] + while len(original_argv) > 0: + if original_argv[0] in ('--name', + '--archive-upload', + '--key-name', + '--key-filename', + '--simultaneous-jobs'): + del original_argv[0:2] + elif original_argv[0] in ('--teardown', + '--upload'): + del original_argv[0] + else: + argv.append(original_argv.pop(0)) + argv.append('/home/' + self.username + + '/teuthology/teuthology/openstack/test/openstack.yaml') + command = ( + "source ~/.bashrc_teuthology ; " + self.teuthology_suite + " " + + " --machine-type openstack " + + " ".join(map(lambda x: "'" + x + "'", argv)) + ) + print self.ssh(command) + + def setup(self): + """ + Create the teuthology cluster if it does not already exists + and return its IP address. 
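+        The security group and the instance are only created if no cluster named --name is found.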
+ """ + if not self.cluster_exists(): + self.create_security_group() + self.create_cluster() + instance_id = self.get_instance_id(self.args.name) + return self.get_floating_ip_or_ip(instance_id) + + def setup_logs(self): + """ + Setup the log level according to --verbose + """ + loglevel = logging.INFO + if self.args.verbose: + loglevel = logging.DEBUG + logging.getLogger("paramiko.transport").setLevel(logging.DEBUG) + teuthology.log.setLevel(loglevel) + + def ssh(self, command): + """ + Run a command in the OpenStack instance of the teuthology cluster. + Return the stdout / stderr of the command. + """ + instance_id = self.get_instance_id(self.args.name) + ip = self.get_floating_ip_or_ip(instance_id) + client_args = { + 'user_at_host': '@'.join((self.username, ip)), + 'retry': False, + } + if self.key_filename: + log.debug("ssh overriding key with " + self.key_filename) + client_args['key_filename'] = self.key_filename + client = connection.connect(**client_args) + stdin, stdout, stderr = client.exec_command(command) + stdout.channel.settimeout(300) + out = '' + try: + out = stdout.read() + log.debug('teardown stdout ' + command + ' ' + out) + except Exception: + log.exception('teardown ' + command + ' failed') + err = stderr.read() + log.debug('teardown stderr ' + command + ' ' + err) + return out + ' ' + err + + def verify_openstack(self): + """ + Check there is a working connection to an OpenStack cluster + and set the provider data member if it is among those we + know already. + """ + try: + misc.sh("openstack server list") + except subprocess.CalledProcessError: + log.exception("openstack server list") + raise Exception("verify openrc.sh has been sourced") + if 'OS_AUTH_URL' not in os.environ: + raise Exception('no OS_AUTH_URL environment variable') + providers = (('cloud.ovh.net', 'ovh'), + ('entercloudsuite.com', 'entercloudsuite')) + self.provider = None + for (pattern, provider) in providers: + if pattern in os.environ['OS_AUTH_URL']: + self.provider = provider + break + + def flavor(self): + """ + Return an OpenStack flavor fit to run the teuthology cluster. + The RAM size depends on the maximum number of workers that + will run simultaneously. + """ + hint = { + 'disk': 10, # GB + 'ram': 1024, # MB + 'cpus': 1, + } + if self.args.simultaneous_jobs > 25: + hint['ram'] = 30000 # MB + elif self.args.simultaneous_jobs > 10: + hint['ram'] = 7000 # MB + elif self.args.simultaneous_jobs > 3: + hint['ram'] = 4000 # MB + + select = None + if self.provider == 'ovh': + select = '^(vps|eg)-' + return super(TeuthologyOpenStack, self).flavor(hint, select) + + def net(self): + """ + Return the network to be used when creating an OpenStack instance. + By default it should not be set. But some providers such as + entercloudsuite require it is. + """ + if self.provider == 'entercloudsuite': + return "--nic net-id=default" + else: + return "" + + def get_user_data(self): + """ + Create a user-data.txt file to be used to spawn the teuthology + cluster, based on a template where the OpenStack credentials + and a few other values are substituted. 
+ """ + path = tempfile.mktemp() + template = open(self.user_data).read() + openrc = '' + for (var, value) in os.environ.iteritems(): + if var.startswith('OS_'): + openrc += ' ' + var + '=' + value + if self.args.upload: + upload = '--archive-upload ' + self.args.archive_upload + else: + upload = '' + clone = teuth_config.openstack['clone'] + log.debug("OPENRC = " + openrc + " " + + "TEUTHOLOGY_USERNAME = " + self.username + " " + + "CLONE_OPENSTACK = " + clone + " " + + "UPLOAD = " + upload + " " + + "NWORKERS = " + str(self.args.simultaneous_jobs)) + content = (template. + replace('OPENRC', openrc). + replace('TEUTHOLOGY_USERNAME', self.username). + replace('CLONE_OPENSTACK', clone). + replace('UPLOAD', upload). + replace('NWORKERS', str(self.args.simultaneous_jobs))) + open(path, 'w').write(content) + log.debug("get_user_data: " + content + " written to " + path) + return path + + def create_security_group(self): + """ + Create a security group that will be used by all teuthology + created instances. This should not be necessary in most cases + but some OpenStack providers enforce firewall restrictions even + among instances created within the same tenant. + """ + try: + misc.sh("openstack security group show teuthology") + return + except subprocess.CalledProcessError: + pass + # TODO(loic): this leaves the teuthology vm very exposed + # it would be better to be very liberal for 192.168.0.0/16 + # and 172.16.0.0/12 and 10.0.0.0/8 and only allow 80/8081/22 + # for the rest. + misc.sh(""" +openstack security group create teuthology +openstack security group rule create --dst-port 1:10000 teuthology +openstack security group rule create --proto udp --dst-port 53 teuthology # dns + """) + + @staticmethod + def get_unassociated_floating_ip(): + """ + Return a floating IP address not associated with an instance or None. + """ + ips = json.loads(misc.sh("openstack ip floating list -f json")) + for ip in ips: + if not ip['Instance ID']: + return ip['IP'] + return None + + @staticmethod + def create_floating_ip(): + pools = json.loads(misc.sh("openstack ip floating pool list -f json")) + if not pools: + return None + pool = pools[0]['Name'] + try: + ip = json.loads(misc.sh( + "openstack ip floating create -f json '" + pool + "'")) + return TeuthologyOpenStack.get_value(ip, 'ip') + except subprocess.CalledProcessError: + log.debug("create_floating_ip: not creating a floating ip") + pass + return None + + @staticmethod + def associate_floating_ip(name_or_id): + """ + Associate a floating IP to the OpenStack instance + or do nothing if no floating ip can be created. + """ + ip = TeuthologyOpenStack.get_unassociated_floating_ip() + if not ip: + ip = TeuthologyOpenStack.create_floating_ip() + if ip: + misc.sh("openstack ip floating add " + ip + " " + name_or_id) + + @staticmethod + def get_floating_ip(instance_id): + """ + Return the floating IP of the OpenStack instance_id. + """ + ips = json.loads(misc.sh("openstack ip floating list -f json")) + for ip in ips: + if ip['Instance ID'] == instance_id: + return ip['IP'] + return None + + @staticmethod + def get_floating_ip_id(ip): + """ + Return the id of a floating IP + """ + results = json.loads(misc.sh("openstack ip floating list -f json")) + for result in results: + if result['IP'] == ip: + return str(result['ID']) + return None + + @staticmethod + def get_floating_ip_or_ip(instance_id): + """ + Return the floating ip, if any, otherwise return the last + IP displayed with openstack server list. 
+ """ + ip = TeuthologyOpenStack.get_floating_ip(instance_id) + if not ip: + ip = re.findall('([\d.]+)$', + TeuthologyOpenStack.get_addresses(instance_id))[0] + return ip + + @staticmethod + def get_instance_id(name): + instance = json.loads(misc.sh("openstack server show -f json " + name)) + return TeuthologyOpenStack.get_value(instance, 'id') + + @staticmethod + def delete_floating_ip(instance_id): + """ + Remove the floating ip from instance_id and delete it. + """ + ip = TeuthologyOpenStack.get_floating_ip(instance_id) + if not ip: + return + misc.sh("openstack ip floating remove " + ip + " " + instance_id) + ip_id = TeuthologyOpenStack.get_floating_ip_id(ip) + misc.sh("openstack ip floating delete " + ip_id) + + def create_cluster(self): + """ + Create an OpenStack instance that runs the teuthology cluster + and wait for it to come up. + """ + user_data = self.get_user_data() + instance = misc.sh( + "openstack server create " + + " --image '" + self.image('ubuntu', '14.04') + "' " + + " --flavor '" + self.flavor() + "' " + + " " + self.net() + + " --key-name " + self.args.key_name + + " --user-data " + user_data + + " --security-group teuthology" + + " --wait " + self.args.name + + " -f json") + instance_id = self.get_value(json.loads(instance), 'id') + os.unlink(user_data) + self.associate_floating_ip(instance_id) + ip = self.get_floating_ip_or_ip(instance_id) + return self.cloud_init_wait(ip) + + def cluster_exists(self): + """ + Return true if there exists an instance running the teuthology cluster. + """ + if not self.exists(self.args.name): + return False + instance_id = self.get_instance_id(self.args.name) + ip = self.get_floating_ip_or_ip(instance_id) + return self.cloud_init_wait(ip) + + def teardown(self): + """ + Delete all instances run by the teuthology cluster and delete the + instance running the teuthology cluster. 
+ """ + self.ssh("sudo /etc/init.d/teuthology stop || true") + instance_id = self.get_instance_id(self.args.name) + self.delete_floating_ip(instance_id) + misc.sh("openstack server delete --wait " + self.args.name) + +def main(ctx, argv): + return TeuthologyOpenStack(ctx, teuth_config, argv).main() diff --git a/teuthology/openstack/archive-key b/teuthology/openstack/archive-key new file mode 100644 index 0000000000..a8861441db --- /dev/null +++ b/teuthology/openstack/archive-key @@ -0,0 +1,27 @@ +-----BEGIN RSA PRIVATE KEY----- +MIIEowIBAAKCAQEAvLz+sao32JL/yMgwTFDTnQVZK3jyXlhQJpHLsgwgHWHQ/27L +fwEbGFVYsJNBGntZwCZvH/K4c0IevbnX/Y69qgmAc9ZpZQLIcIF0A8hmwVYRU+Ap +TAK2qAvadThWfiRBA6+SGoRy6VV5MWeq+hqlGf9axRKqhECNhHuGBuBeosUOZOOH +NVzvFIbp/4842yYrZUDnDzW7JX2kYGi6kaEAYeR8qYJgT/95Pm4Bgu1V7MI36rx1 +O/5BSPF3LvDSnnaZyHCDZtwzC50lBnS2nx8kKPmmdKBSEJoTdNRPIXZ/lMq5pzIW +QPDjI8O5pbX1BJcxfFlZ/h+bI6u8IX3vfTGHWwIDAQABAoIBAG5yLp0rHfkXtKT7 +OQA/wEW/znmZEkPRbD3VzZyIafanuhTv8heFPyTTNM5Hra5ghpniI99PO07/X1vp +OBMCB81MOCYRT6WzpjXoG0rnZ/I1enhZ0fDQGbFnFlTIPh0c/Aq7IEVyQoh24y/d +GXm4Q+tdufFfRfeUivv/CORXQin/Iugbklj8erjx+fdVKPUXilmDIEVleUncer5/ +K5Fxy0lWbm6ZX1fE+rfJvCwNjAaIJgrN8TWUTE8G72F9Y0YU9hRtqOZe6MMbSufy +5+/yj2Vgp+B8Id7Ass2ylDQKsjBett/M2bNKt/DUVIiaxKi0usNSerLvtbkWEw9s +tgUI6ukCgYEA6qqnZwkbgV0lpj1MrQ3BRnFxNR42z2MyEY5xRGaYp22ByxS207z8 +mM3EuLH8k2u6jzsGoPpBWhBbs97MuGDHwsMEO5rBpytnTE4Hxrgec/13Arzk4Bme +eqg1Ji+lNkoLzEHkuihskcZwnQ8uaOdqrnH/NRGuUhA9hjeh+lQzBy8CgYEAzeV1 +zYsw8xIBFtbmFhBQ8imHr0SQalTiQU2Qn46LORK0worsf4sZV5ZF3VBRdnCUwwbm +0XaMb3kE2UBlU8qPqLgxXPNjcEKuqtVlp76dT/lrXIhYUq+Famrf20Lm01kC5itz +QF247hnUfo2uzxpatuEr2ggs2NjuODn57tVw95UCgYEAv0s+C5AxC9OSzWFLEAcW +dwYi8toedBC4z/b9/nRkHJf4JkRMhW6ZuzaCFs2Ax+wZuIi1bqSSgYi0OHx3BhZe +wTWYTb5p/owzONCjJisRKByG14SETuqTdgmIyggs9YSG+Yr9mYM6fdr2EhI+EuYS +4QGsuOYg5GS4wqC3OglJT6ECgYA8y28QRPQsIXnO259OjnzINDkLKGyX6P5xl8yH +QFidfod/FfQk6NaPxSBV67xSA4X5XBVVbfKji5FB8MC6kAoBIHn63ybSY+4dJSuB +70eV8KihxuSFbawwMuRsYoGzkAnKGrRKIiJTs67Ju14NatO0QiJnm5haYxtb4MqK +md1kTQKBgDmTxtSBVOV8eMhl076OoOvdnpb3sy/obI/XUvurS0CaAcqmkVSNJ6c+ +g1O041ocTbuW5d3fbzo9Jyle6qsvUQd7fuoUfAMrd0inKsuYPPM0IZOExbt8QqLI +KFJ+r/nQYoJkmiNO8PssxcP3CMFB6TpUx0BgFcrhH//TtKKNrGTl +-----END RSA PRIVATE KEY----- diff --git a/teuthology/openstack/archive-key.pub b/teuthology/openstack/archive-key.pub new file mode 100644 index 0000000000..57513806d4 --- /dev/null +++ b/teuthology/openstack/archive-key.pub @@ -0,0 +1 @@ +ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC8vP6xqjfYkv/IyDBMUNOdBVkrePJeWFAmkcuyDCAdYdD/bst/ARsYVViwk0Eae1nAJm8f8rhzQh69udf9jr2qCYBz1mllAshwgXQDyGbBVhFT4ClMAraoC9p1OFZ+JEEDr5IahHLpVXkxZ6r6GqUZ/1rFEqqEQI2Ee4YG4F6ixQ5k44c1XO8Uhun/jzjbJitlQOcPNbslfaRgaLqRoQBh5HypgmBP/3k+bgGC7VXswjfqvHU7/kFI8Xcu8NKedpnIcINm3DMLnSUGdLafHyQo+aZ0oFIQmhN01E8hdn+UyrmnMhZA8OMjw7mltfUElzF8WVn+H5sjq7whfe99MYdb loic@fold diff --git a/teuthology/openstack/openstack-centos-6.5-user-data.txt b/teuthology/openstack/openstack-centos-6.5-user-data.txt new file mode 100644 index 0000000000..76a637b112 --- /dev/null +++ b/teuthology/openstack/openstack-centos-6.5-user-data.txt @@ -0,0 +1,27 @@ +#cloud-config +bootcmd: + - echo nameserver {nameserver} | tee /etc/resolv.conf + - echo search {lab_domain} | tee -a /etc/resolv.conf + - sed -ie 's/PEERDNS="yes"/PEERDNS="no"/' /etc/sysconfig/network-scripts/ifcfg-eth0 + - ( curl --silent http://169.254.169.254/2009-04-04/meta-data/hostname | sed -e 's/[\.-].*//' ; eval printf "%03d%03d.{lab_domain}" $(curl --silent http://169.254.169.254/2009-04-04/meta-data/local-ipv4 | sed -e 's/.*\.\(.*\)\.\(.*\)/\1 \2/') ) | tee 
/etc/hostname + - hostname $(cat /etc/hostname) + - yum install -y yum-utils && yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/6/x86_64/ && yum install --nogpgcheck -y epel-release && rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6 && rm /etc/yum.repos.d/dl.fedoraproject.org* + - ( echo ; echo "MaxSessions 1000" ) >> /etc/ssh/sshd_config + - ( echo 'Defaults !requiretty' ; echo 'Defaults visiblepw' ) | tee /etc/sudoers.d/cephlab_sudo +preserve_hostname: true +system_info: + default_user: + name: {username} +packages: + - python + - wget + - git + - ntp + - dracut-modules-growroot +runcmd: + - mkinitrd --force /boot/initramfs-2.6.32-504.1.3.el6.x86_64.img 2.6.32-504.1.3.el6.x86_64 + - reboot +#runcmd: +# # if /mnt is on ephemeral, that moves /home/{username} on the ephemeral, otherwise it does nothing +# - rsync -a --numeric-ids /home/{username}/ /mnt/ && rm -fr /home/{username} && ln -s /mnt /home/{username} +final_message: "{up}, after $UPTIME seconds" diff --git a/teuthology/openstack/openstack-centos-7.0-user-data.txt b/teuthology/openstack/openstack-centos-7.0-user-data.txt new file mode 100644 index 0000000000..abba277cf1 --- /dev/null +++ b/teuthology/openstack/openstack-centos-7.0-user-data.txt @@ -0,0 +1,25 @@ +#cloud-config +bootcmd: + - echo nameserver {nameserver} | tee /etc/resolv.conf + - echo search {lab_domain} | tee -a /etc/resolv.conf + - sed -ie 's/PEERDNS="yes"/PEERDNS="no"/' /etc/sysconfig/network-scripts/ifcfg-eth0 + - ( curl --silent http://169.254.169.254/2009-04-04/meta-data/hostname | sed -e 's/[\.-].*//' ; eval printf "%03d%03d.{lab_domain}" $(curl --silent http://169.254.169.254/2009-04-04/meta-data/local-ipv4 | sed -e 's/.*\.\(.*\)\.\(.*\)/\1 \2/') ) | tee /etc/hostname + - hostname $(cat /etc/hostname) + - ( echo ; echo "MaxSessions 1000" ) >> /etc/ssh/sshd_config +# See https://github.com/ceph/ceph-cm-ansible/blob/master/roles/cobbler/templates/snippets/cephlab_user + - ( echo 'Defaults !requiretty' ; echo 'Defaults visiblepw' ) | tee /etc/sudoers.d/cephlab_sudo ; chmod 0440 /etc/sudoers.d/cephlab_sudo +preserve_hostname: true +system_info: + default_user: + name: {username} +packages: + - python + - wget + - git + - ntp + - redhat-lsb-core +# this does not work on centos, ssh key will not be working, maybe because there is a symlink to reach it ? 
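+# (the /mnt relocation used in the Ubuntu user-data is therefore left commented out below)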
+#runcmd: +# # if /mnt is on ephemeral, that moves /home/{username} on the ephemeral, otherwise it does nothing +# - rsync -a --numeric-ids /home/{username}/ /mnt/ && rm -fr /home/{username} && ln -s /mnt /home/{username} +final_message: "{up}, after $UPTIME seconds" diff --git a/teuthology/openstack/openstack-debian-7.0-user-data.txt b/teuthology/openstack/openstack-debian-7.0-user-data.txt new file mode 120000 index 0000000000..a51b0cc9bc --- /dev/null +++ b/teuthology/openstack/openstack-debian-7.0-user-data.txt @@ -0,0 +1 @@ +openstack-ubuntu-user-data.txt \ No newline at end of file diff --git a/teuthology/openstack/openstack-opensuse-user-data.txt b/teuthology/openstack/openstack-opensuse-user-data.txt new file mode 100644 index 0000000000..4071354cf6 --- /dev/null +++ b/teuthology/openstack/openstack-opensuse-user-data.txt @@ -0,0 +1,13 @@ +#cloud-config +users: + - name: clouduser + gecos: User + sudo: ["ALL=(ALL) NOPASSWD:ALL"] + groups: users + ssh_pwauth: True +chpasswd: + list: | + clouduser:linux + expire: False +ssh_pwauth: True + diff --git a/teuthology/openstack/openstack-teuthology.init b/teuthology/openstack/openstack-teuthology.init new file mode 100755 index 0000000000..f99c4b4d91 --- /dev/null +++ b/teuthology/openstack/openstack-teuthology.init @@ -0,0 +1,82 @@ +#!/bin/bash +# +# Copyright (c) 2015 Red Hat, Inc. +# +# Author: Loic Dachary +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in +# all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +# THE SOFTWARE. +# +### BEGIN INIT INFO +# Provides: teuthology +# Required-Start: $network $remote_fs $syslog beanstalkd nginx +# Required-Stop: $network $remote_fs $syslog +# Default-Start: 2 3 4 5 +# Default-Stop: +# Short-Description: Start teuthology +### END INIT INFO + +cd /home/ubuntu + +source /etc/default/teuthology + +user=${TEUTHOLOGY_USERNAME:-ubuntu} + +case $1 in + start) + /etc/init.d/beanstalkd start + su - -c "cd /home/$user/paddles ; virtualenv/bin/pecan serve config.py" $user > /var/log/paddles.log 2>&1 & + su - -c "cd /home/$user/pulpito ; virtualenv/bin/python run.py" $user > /var/log/pulpito.log 2>&1 & + sleep 3 + ( + cd /home/$user + source openrc.sh + cd teuthology + . virtualenv/bin/activate + teuthology-lock --list-targets --owner scheduled_$user@teuthology > /tmp/t + if test -s /tmp/t && ! 
grep -qq 'targets: {}' /tmp/t ; then + teuthology-lock --unlock -t /tmp/t --owner scheduled_$user@teuthology + fi + mkdir -p /tmp/log + chown $user /tmp/log + for i in $(seq 1 $NWORKERS) ; do + su - -c "cd /home/$user ; source openrc.sh ; cd teuthology ; LC_ALL=C virtualenv/bin/teuthology-worker --tube openstack -l /tmp/log --archive-dir /usr/share/nginx/html" $user > /var/log/teuthology.$i 2>&1 & + done + ) + ;; + stop) + pkill -f 'pecan serve' + pkill -f 'python run.py' + pkill -f 'teuthology-worker' + pkill -f 'ansible' + /etc/init.d/beanstalkd stop + source /home/$user/teuthology/virtualenv/bin/activate + source /home/$user/openrc.sh + ip=$(ip a show dev eth0 | sed -n "s:.*inet \(.*\)/.*:\1:p") + openstack server list --long -f json | \ + jq ".[] | select(.Properties | contains(\"ownedby='$ip'\")) | .ID" | \ + while read uuid ; do + eval openstack server delete $uuid + done + ;; + restart) + $0 stop + $0 start + ;; + *) +esac diff --git a/teuthology/openstack/openstack-ubuntu-14.04-user-data.txt b/teuthology/openstack/openstack-ubuntu-14.04-user-data.txt new file mode 120000 index 0000000000..a51b0cc9bc --- /dev/null +++ b/teuthology/openstack/openstack-ubuntu-14.04-user-data.txt @@ -0,0 +1 @@ +openstack-ubuntu-user-data.txt \ No newline at end of file diff --git a/teuthology/openstack/openstack-ubuntu-user-data.txt b/teuthology/openstack/openstack-ubuntu-user-data.txt new file mode 100644 index 0000000000..b7c94fb224 --- /dev/null +++ b/teuthology/openstack/openstack-ubuntu-user-data.txt @@ -0,0 +1,22 @@ +#cloud-config +bootcmd: + - apt-get remove --purge -y resolvconf || true + - echo 'prepend domain-name-servers {nameserver};' | sudo tee -a /etc/dhcp/dhclient.conf + - echo 'supersede domain-name "{lab_domain}";' | sudo tee -a /etc/dhcp/dhclient.conf + - ifdown eth0 ; ifup eth0 + - ( curl --silent http://169.254.169.254/2009-04-04/meta-data/hostname | sed -e 's/[\.-].*//' ; eval printf "%03d%03d.{lab_domain}" $(curl --silent http://169.254.169.254/2009-04-04/meta-data/local-ipv4 | sed -e 's/.*\.\(.*\)\.\(.*\)/\1 \2/') ) | tee /etc/hostname + - hostname $(cat /etc/hostname) + - echo "MaxSessions 1000" >> /etc/ssh/sshd_config +preserve_hostname: true +system_info: + default_user: + name: {username} +packages: + - python + - wget + - git + - ntp +runcmd: + # if /mnt is on ephemeral, that moves /home/{username} on the ephemeral, otherwise it does nothing + - rsync -a --numeric-ids /home/{username}/ /mnt/ && rm -fr /home/{username} && ln -s /mnt /home/{username} +final_message: "{up}, after $UPTIME seconds" diff --git a/teuthology/openstack/openstack-user-data.txt b/teuthology/openstack/openstack-user-data.txt new file mode 100644 index 0000000000..a0d471e023 --- /dev/null +++ b/teuthology/openstack/openstack-user-data.txt @@ -0,0 +1,16 @@ +#cloud-config +bootcmd: + - touch /tmp/init.out +system_info: + default_user: + name: TEUTHOLOGY_USERNAME +packages: + - python-virtualenv + - git + - rsync +runcmd: + - su - -c '(set -x ; CLONE_OPENSTACK && cd teuthology && ./bootstrap install)' TEUTHOLOGY_USERNAME >> /tmp/init.out 2>&1 + - echo 'export OPENRC' | tee /home/TEUTHOLOGY_USERNAME/openrc.sh + - su - -c '(set -x ; source openrc.sh ; cd teuthology ; source virtualenv/bin/activate ; openstack keypair delete teuthology || true ; teuthology/openstack/setup-openstack.sh --nworkers NWORKERS UPLOAD --setup-all)' TEUTHOLOGY_USERNAME >> /tmp/init.out 2>&1 + - /etc/init.d/teuthology restart +final_message: "teuthology is up and running after $UPTIME seconds" diff --git 
a/teuthology/openstack/setup-openstack.sh b/teuthology/openstack/setup-openstack.sh new file mode 100755 index 0000000000..0aea3302ae --- /dev/null +++ b/teuthology/openstack/setup-openstack.sh @@ -0,0 +1,589 @@ +#!/bin/bash +# +# Copyright (c) 2015 Red Hat, Inc. +# +# Author: Loic Dachary +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in +# all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +# THE SOFTWARE. +# + +# +# Most of this file is intended to be obsoleted by the ansible equivalent +# when they are available (setting up paddles, pulpito, etc.). +# +function create_config() { + local network="$1" + local subnet="$2" + local nameserver="$3" + local labdomain="$4" + local ip="$5" + local flavor_select="$6" + local archive_upload="$7" + + if test "$flavor_select" ; then + flavor_select="flavor-select-regexp: $flavor_select" + fi + + if test "$network" ; then + network="network: $network" + fi + + if test "$archive_upload" ; then + archive_upload="archive_upload: $archive_upload" + fi + + cat > ~/.teuthology.yaml < /dev/null 2>&1 ; then + sudo -u postgres psql -c "CREATE USER paddles with PASSWORD 'paddles';" || return 1 + sudo -u postgres createdb -O paddles paddles || return 1 + fi + ( + cd $paddles_dir || return 1 + git pull --rebase + git clean -ffqdx + sed -e "s|^address.*|address = 'http://localhost'|" \ + -e "s|^job_log_href_templ = 'http://qa-proxy.ceph.com/teuthology|job_log_href_templ = 'http://$public_ip|" \ + -e "/sqlite/d" \ + -e "s|.*'postgresql+psycop.*'|'url': 'postgresql://paddles:paddles@localhost/paddles'|" \ + -e "s/'host': '127.0.0.1'/'host': '0.0.0.0'/" \ + < config.py.in > config.py + virtualenv ./virtualenv + source ./virtualenv/bin/activate + pip install -r requirements.txt + pip install sqlalchemy tzlocal requests netaddr + python setup.py develop + ) + + echo "CONFIGURED the paddles server" +} + +function populate_paddles() { + local subnet=$1 + local labdomain=$2 + + local paddles_dir=$(dirname $0)/../../../paddles + + local url='postgresql://paddles:paddles@localhost/paddles' + + pkill -f 'pecan serve' + + sudo -u postgres dropdb paddles + sudo -u postgres createdb -O paddles paddles + + ( + cd $paddles_dir || return 1 + source virtualenv/bin/activate + pecan populate config.py + + ( + echo "begin transaction;" + subnet_names_and_ips $subnet | while read name ip ; do + echo "insert into nodes (name,machine_type,is_vm,locked,up) values ('${name}.${labdomain}', 'openstack', TRUE, FALSE, TRUE);" + done + echo "commit transaction;" + ) | psql --quiet $url + + setsid pecan serve config.py < /dev/null > /dev/null 2>&1 & + for i 
in $(seq 1 20) ; do + if curl --silent http://localhost:8080/ > /dev/null 2>&1 ; then + break + else + echo -n . + sleep 5 + fi + done + echo -n ' ' + ) + + echo "RESET the paddles server" +} + +function teardown_pulpito() { + if pkill -f 'python run.py' ; then + echo "SHUTDOWN the pulpito server" + fi +} + +function setup_pulpito() { + local pulpito=http://localhost:8081/ + + local pulpito_dir=$(dirname $0)/../../../pulpito + + if curl --silent $pulpito | grep -q pulpito ; then + echo "OK pulpito is running" + return 0 + fi + + if ! test -d $pulpito_dir ; then + git clone https://github.com/ceph/pulpito.git $pulpito_dir || return 1 + fi + + sudo apt-get -qq install -y nginx + local nginx_conf=/etc/nginx/sites-available/default + if ! grep -qq 'autoindex on' $nginx_conf ; then + sudo perl -pi -e 's|location / {|location / { autoindex on;|' $nginx_conf + sudo /etc/init.d/nginx restart + echo "ADDED autoindex on to nginx configuration" + fi + sudo chown $USER /usr/share/nginx/html + ( + cd $pulpito_dir || return 1 + git pull --rebase + git clean -ffqdx + sed -e "s|paddles_address.*|paddles_address = 'http://localhost:8080'|" < config.py.in > prod.py + virtualenv ./virtualenv + source ./virtualenv/bin/activate + pip install -r requirements.txt + python run.py & + ) + + echo "LAUNCHED the pulpito server" +} + +function setup_bashrc() { + if test -f ~/.bashrc && grep -qq '.bashrc_teuthology' ~/.bashrc ; then + echo "OK .bashrc_teuthology found in ~/.bashrc" + else + cat > ~/.bashrc_teuthology <<'EOF' +source $HOME/openrc.sh +source $HOME/teuthology/virtualenv/bin/activate +export HISTSIZE=500000 +export PROMPT_COMMAND='history -a' +EOF + echo 'source $HOME/.bashrc_teuthology' >> ~/.bashrc + echo "ADDED .bashrc_teuthology to ~/.bashrc" + fi +} + +function setup_ssh_config() { + if test -f ~/.ssh/config && grep -qq 'StrictHostKeyChecking no' ~/.ssh/config ; then + echo "OK ~/.ssh/config" + else + cat >> ~/.ssh/config <> ~/.ssh/authorized_keys + chmod 600 teuthology/openstack/archive-key + echo "APPEND to ~/.ssh/authorized_keys" +} + +function setup_bootscript() { + local nworkers=$1 + + local where=$(dirname $0) + + sudo cp -a $where/openstack-teuthology.init /etc/init.d/teuthology + echo NWORKERS=$1 | sudo tee /etc/default/teuthology > /dev/null + echo "CREATED init script /etc/init.d/teuthology" +} + +function get_or_create_keypair() { + local keypair=$1 + local key_file=$HOME/.ssh/id_rsa + + if ! openstack keypair show $keypair > /dev/null 2>&1 ; then + if test -f $key_file ; then + if ! test -f $key_file.pub ; then + ssh-keygen -y -f $key_file > $key_file.pub || return 1 + fi + openstack keypair create --public-key $key_file.pub $keypair || return 1 + echo "IMPORTED keypair $keypair" + else + openstack keypair create $keypair > $key_file || return 1 + chmod 600 $key_file + echo "CREATED keypair $keypair" + fi + else + echo "OK keypair $keypair exists" + fi +} + +function delete_keypair() { + local keypair=$1 + + if openstack keypair show $keypair > /dev/null 2>&1 ; then + openstack keypair delete $keypair || return 1 + echo "REMOVED keypair $keypair" + fi +} + +function setup_dnsmasq() { + + if ! 
test -f /etc/dnsmasq.d/resolv ; then + resolver=$(grep nameserver /etc/resolv.conf | head -1 | perl -ne 'print $1 if(/\s*nameserver\s+([\d\.]+)/)') + sudo apt-get -qq install -y dnsmasq resolvconf + echo resolv-file=/etc/dnsmasq-resolv.conf | sudo tee /etc/dnsmasq.d/resolv + echo nameserver $resolver | sudo tee /etc/dnsmasq-resolv.conf + sudo /etc/init.d/dnsmasq restart + sudo sed -ie 's/^#IGNORE_RESOLVCONF=yes/IGNORE_RESOLVCONF=yes/' /etc/default/dnsmasq + echo nameserver 127.0.0.1 | sudo tee /etc/resolvconf/resolv.conf.d/head + sudo resolvconf -u + # see http://tracker.ceph.com/issues/12212 apt-mirror.front.sepia.ceph.com is not publicly accessible + echo host-record=apt-mirror.front.sepia.ceph.com,64.90.32.37 | sudo tee /etc/dnsmasq.d/apt-mirror + echo "INSTALLED dnsmasq and configured to be a resolver" + else + echo "OK dnsmasq installed" + fi +} + +function subnet_names_and_ips() { + local subnet=$1 + python -c 'import netaddr; print "\n".join([str(i) for i in netaddr.IPNetwork("'$subnet'")])' | + sed -e 's/\./ /g' | while read a b c d ; do + printf "target%03d%03d " $c $d + echo $a.$b.$c.$d + done +} + +function define_dnsmasq() { + local subnet=$1 + local labdomain=$2 + local host_records=/etc/dnsmasq.d/teuthology + if ! test -f $host_records ; then + subnet_names_and_ips $subnet | while read name ip ; do + echo host-record=$name.$labdomain,$ip + done | sudo tee $host_records > /tmp/dnsmasq + head -2 /tmp/dnsmasq + echo 'etc.' + sudo /etc/init.d/dnsmasq restart + echo "CREATED $host_records" + else + echo "OK $host_records exists" + fi +} + +function undefine_dnsmasq() { + local host_records=/etc/dnsmasq.d/teuthology + + sudo rm -f $host_records + echo "REMOVED $host_records" +} + +function setup_ansible() { + local subnet=$1 + local labdomain=$2 + local dir=/etc/ansible/hosts + if ! test -f $dir/teuthology ; then + sudo mkdir -p $dir/group_vars + echo '[testnodes]' | sudo tee $dir/teuthology + subnet_names_and_ips $subnet | while read name ip ; do + echo $name.$labdomain + done | sudo tee -a $dir/teuthology > /tmp/ansible + head -2 /tmp/ansible + echo 'etc.' + echo 'modify_fstab: false' | sudo tee $dir/group_vars/all.yml + echo "CREATED $dir/teuthology" + else + echo "OK $dir/teuthology exists" + fi +} + +function teardown_ansible() { + sudo rm -fr /etc/ansible/hosts/teuthology +} + +function remove_images() { + glance image-list --property-filter ownedby=teuthology | grep -v -e ---- -e 'Disk Format' | cut -f4 -d ' ' | while read image ; do + echo "DELETED iamge $image" + glance image-delete $image + done +} + +function install_packages() { + + if ! test -f /etc/apt/sources.list.d/trusty-backports.list ; then + echo deb http://archive.ubuntu.com/ubuntu trusty-backports main universe | sudo tee /etc/apt/sources.list.d/trusty-backports.list + sudo apt-get update + fi + + local packages="jq realpath" + sudo apt-get -qq install -y $packages + + echo "INSTALL required packages $packages" +} + +CAT=${CAT:-cat} + +function set_nameserver() { + local subnet_id=$1 + local nameserver=$2 + + eval local current_nameserver=$(neutron subnet-show -f json $subnet_id | jq '.[] | select(.Field == "dns_nameservers") | .Value' ) + + if test "$current_nameserver" = "$nameserver" ; then + echo "OK nameserver is $nameserver" + else + neutron subnet-update --dns-nameserver $nameserver $subnet_id || return 1 + echo "CHANGED nameserver from $current_nameserver to $nameserver" + fi +} + +function verify_openstack() { + if ! 
openstack server list > /dev/null ; then + echo ERROR: the credentials from ~/openrc.sh are not working >&2 + return 1 + fi + echo "OK $OS_TENANT_NAME can use $OS_AUTH_URL" >&2 + local provider + if echo $OS_AUTH_URL | grep -qq cloud.ovh.net ; then + provider=ovh + elif echo $OS_AUTH_URL | grep -qq entercloudsuite.com ; then + provider=entercloudsuite + else + provider=standardopenstack + fi + echo "OPENSTACK PROVIDER $provider" >&2 + echo $provider +} + +function main() { + local network + local subnet + local nameserver + local labdomain=teuthology + local nworkers=2 + local flavor_select + local keypair=teuthology + local archive_upload + + local do_setup_keypair=false + local do_create_config=false + local do_setup_dnsmasq=false + local do_install_packages=false + local do_setup_paddles=false + local do_populate_paddles=false + local do_setup_pulpito=false + local do_clobber=false + + export LC_ALL=C + + while [ $# -ge 1 ]; do + case $1 in + --verbose) + set -x + PS4='${FUNCNAME[0]}: $LINENO: ' + ;; + --nameserver) + shift + nameserver=$1 + ;; + --subnet) + shift + subnet=$1 + ;; + --labdomain) + shift + labdomain=$1 + ;; + --nworkers) + shift + nworkers=$1 + ;; + --archive-upload) + shift + archive_upload=$1 + ;; + --install) + do_install_packages=true + ;; + --config) + do_create_config=true + ;; + --setup-keypair) + do_setup_keypair=true + ;; + --setup-dnsmasq) + do_setup_dnsmasq=true + ;; + --setup-paddles) + do_setup_paddles=true + ;; + --setup-pulpito) + do_setup_pulpito=true + ;; + --populate-paddles) + do_populate_paddles=true + ;; + --setup-all) + do_install_packages=true + do_create_config=true + do_setup_keypair=true + do_setup_dnsmasq=true + do_setup_paddles=true + do_setup_pulpito=true + do_populate_paddles=true + ;; + --clobber) + do_clobber=true + ;; + *) + echo $1 is not a known option + return 1 + ;; + esac + shift + done + + if $do_install_packages ; then + install_packages || return 1 + fi + + local provider=$(verify_openstack) + + eval local default_subnet=$(neutron subnet-list -f json | jq '.[0].cidr') + if test -z "$default_subnet" ; then + default_subnet=$(nova tenant-network-list | grep / | cut -f6 -d' ' | head -1) + fi + : ${subnet:=$default_subnet} + + case $provider in + entercloudsuite) + eval local network=$(neutron net-list -f json | jq '.[] | select(.subnets | contains("'$subnet'")) | .name') + ;; + esac + + case $provider in + ovh) + flavor_select='^(vps|eg)-' + ;; + esac + + local ip=$(ip a show dev eth0 | sed -n "s:.*inet \(.*\)/.*:\1:p") + : ${nameserver:=$ip} + + if $do_create_config ; then + create_config "$network" "$subnet" "$nameserver" "$labdomain" "$ip" "$flavor_select" "$archive_upload" || return 1 + setup_ansible $subnet $labdomain || return 1 + setup_ssh_config || return 1 + setup_authorized_keys || return 1 + setup_bashrc || return 1 + setup_bootscript $nworkers || return 1 + fi + + if $do_setup_keypair ; then + get_or_create_keypair $keypair || return 1 + fi + + if $do_setup_dnsmasq ; then + setup_dnsmasq || return 1 + define_dnsmasq $subnet $labdomain || return 1 + fi + + if $do_setup_paddles ; then + setup_paddles $ip || return 1 + fi + + if $do_populate_paddles ; then + populate_paddles $subnet $labdomain || return 1 + fi + + if $do_setup_pulpito ; then + setup_pulpito || return 1 + fi + + if $do_clobber ; then + undefine_dnsmasq || return 1 + delete_keypair $keypair || return 1 + teardown_paddles || return 1 + teardown_pulpito || return 1 + teardown_ansible || return 1 + remove_images || return 1 + fi +} + +main "$@" diff --git 
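The ``populate_paddles``, ``define_dnsmasq`` and ``setup_ansible`` helpers above all derive the same deterministic ``targetCCCDDD`` host names from the tenant subnet, and ``ProvisionOpenStack.ip2name`` later in this patch applies the same zero padding when renaming freshly created instances. A minimal Python sketch of that naming scheme (folding in the lab domain that the shell callers append), assuming the ``netaddr`` module is importable and using an illustrative /29 subnet::

    import netaddr

    def subnet_names_and_ips(subnet, labdomain='teuthology'):
        # One (fqdn, ip) pair per address in the subnet; the short name is
        # 'target' followed by the zero-padded last two octets of the address.
        for ip in netaddr.IPNetwork(subnet):
            octets = str(ip).split('.')
            name = 'target%03d%03d' % (int(octets[2]), int(octets[3]))
            yield '%s.%s' % (name, labdomain), str(ip)

    for fqdn, ip in subnet_names_and_ips('10.0.1.0/29'):
        print('%s %s' % (fqdn, ip))   # target001000.teuthology 10.0.1.0, ...

These names end up in the paddles ``nodes`` table, the dnsmasq host records and the ansible inventory alike, which keeps all three consistent without any shared state.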
a/teuthology/openstack/test/__init__.py b/teuthology/openstack/test/__init__.py new file mode 100644 index 0000000000..e69de29bb2 diff --git a/teuthology/openstack/test/archive-on-error.yaml b/teuthology/openstack/test/archive-on-error.yaml new file mode 100644 index 0000000000..f9f5247926 --- /dev/null +++ b/teuthology/openstack/test/archive-on-error.yaml @@ -0,0 +1 @@ +archive-on-error: true diff --git a/teuthology/openstack/test/noop.yaml b/teuthology/openstack/test/noop.yaml new file mode 100644 index 0000000000..6aae7ec906 --- /dev/null +++ b/teuthology/openstack/test/noop.yaml @@ -0,0 +1,12 @@ +stop_worker: true +machine_type: openstack +os_type: ubuntu +os_version: "14.04" +roles: +- - mon.a + - osd.0 +tasks: +- exec: + mon.a: + - echo "Well done !" + diff --git a/teuthology/openstack/test/openstack-integration.py b/teuthology/openstack/test/openstack-integration.py new file mode 100644 index 0000000000..8fe0412959 --- /dev/null +++ b/teuthology/openstack/test/openstack-integration.py @@ -0,0 +1,272 @@ +# +# Copyright (c) 2015 Red Hat, Inc. +# +# Author: Loic Dachary +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in +# all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +# THE SOFTWARE. 
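A job description like ``noop.yaml`` above can be queued on the ``openstack`` tube without going through ``teuthology-suite``; the ``TestSchedule`` integration tests further down drive ``scripts.schedule`` in exactly this way. A minimal sketch, assuming a teuthology checkout with a worker and paddles already configured by ``setup-openstack.sh`` (run name and owner are illustrative)::

    import scripts.schedule

    # Queue the noop job; a teuthology-worker listening on the
    # 'openstack' tube will pick it up and run it.
    args = ['--name', 'manual-noop',        # illustrative run name
            '--verbose',
            '--owner', 'me@example.com',    # illustrative owner
            '--worker', 'openstack',
            'teuthology/openstack/test/noop.yaml']
    scripts.schedule.main(args)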
+# +import argparse +import logging +import json +import os +import subprocess +import tempfile +import shutil + +import teuthology.lock +import teuthology.nuke +import teuthology.misc +import teuthology.schedule +import teuthology.suite +import teuthology.openstack +import scripts.schedule +import scripts.lock +import scripts.suite +from teuthology.config import config as teuth_config + +class Integration(object): + + @classmethod + def setup_class(self): + teuthology.log.setLevel(logging.DEBUG) + teuthology.misc.read_config(argparse.Namespace()) + self.teardown_class() + + @classmethod + def teardown_class(self): + os.system("sudo /etc/init.d/beanstalkd restart") + # if this fails it will not show the error but some weird + # INTERNALERROR> IndexError: list index out of range + # move that to def tearDown for debug and when it works move it + # back in tearDownClass so it is not called on every test + all_instances = teuthology.misc.sh("openstack server list -f json --long") + for instance in json.loads(all_instances): + if 'teuthology=' in instance['Properties']: + teuthology.misc.sh("openstack server delete --wait " + instance['ID']) + teuthology.misc.sh(""" +teuthology/openstack/setup-openstack.sh \ + --populate-paddles + """) + + def setup_worker(self): + self.logs = self.d + "/log" + os.mkdir(self.logs, 0o755) + self.archive = self.d + "/archive" + os.mkdir(self.archive, 0o755) + self.worker_cmd = ("teuthology-worker --tube openstack " + + "-l " + self.logs + " " + "--archive-dir " + self.archive + " ") + logging.info(self.worker_cmd) + self.worker = subprocess.Popen(self.worker_cmd, + stdout=subprocess.PIPE, + stderr=subprocess.PIPE, + shell=True) + + def wait_worker(self): + if not self.worker: + return + + (stdoutdata, stderrdata) = self.worker.communicate() + stdoutdata = stdoutdata.decode('utf-8') + stderrdata = stderrdata.decode('utf-8') + logging.info(self.worker_cmd + ":" + + " stdout " + stdoutdata + + " stderr " + stderrdata + " end ") + assert self.worker.returncode == 0 + self.worker = None + + def get_teuthology_log(self): + # the archive is removed before each test, there must + # be only one run and one job + run = os.listdir(self.archive)[0] + job = os.listdir(os.path.join(self.archive, run))[0] + path = os.path.join(self.archive, run, job, 'teuthology.log') + return open(path, 'r').read() + +class TestSuite(Integration): + + def setup(self): + self.d = tempfile.mkdtemp() + self.setup_worker() + logging.info("TestSuite: done worker") + + def teardown(self): + self.wait_worker() + shutil.rmtree(self.d) + + def test_suite_noop(self): + cwd = os.getcwd() + os.mkdir(self.d + '/upload', 0o755) + upload = 'localhost:' + self.d + '/upload' + args = ['--suite', 'noop', + '--suite-dir', cwd + '/teuthology/openstack/test', + '--machine-type', 'openstack', + '--archive-upload', upload, + '--verbose'] + logging.info("TestSuite:test_suite_noop") + scripts.suite.main(args) + self.wait_worker() + log = self.get_teuthology_log() + assert "teuthology.run:pass" in log + assert "Well done" in log + upload_key = teuth_config.archive_upload_key + if upload_key: + ssh = "RSYNC_RSH='ssh -i " + upload_key + "'" + else: + ssh = '' + assert 'teuthology.log' in teuthology.misc.sh(ssh + " rsync -av " + upload) + + def test_suite_nuke(self): + cwd = os.getcwd() + args = ['--suite', 'nuke', + '--suite-dir', cwd + '/teuthology/openstack/test', + '--machine-type', 'openstack', + '--verbose'] + logging.info("TestSuite:test_suite_nuke") + scripts.suite.main(args) + self.wait_worker() + log = 
self.get_teuthology_log() + assert "teuthology.run:FAIL" in log + locks = teuthology.lock.list_locks(locked=True) + assert len(locks) == 0 + +class TestSchedule(Integration): + + def setup(self): + self.d = tempfile.mkdtemp() + self.setup_worker() + + def teardown(self): + self.wait_worker() + shutil.rmtree(self.d) + + def test_schedule_stop_worker(self): + job = 'teuthology/openstack/test/stop_worker.yaml' + args = ['--name', 'fake', + '--verbose', + '--owner', 'test@test.com', + '--worker', 'openstack', + job] + scripts.schedule.main(args) + self.wait_worker() + + def test_schedule_noop(self): + job = 'teuthology/openstack/test/noop.yaml' + args = ['--name', 'fake', + '--verbose', + '--owner', 'test@test.com', + '--worker', 'openstack', + job] + scripts.schedule.main(args) + self.wait_worker() + log = self.get_teuthology_log() + assert "teuthology.run:pass" in log + assert "Well done" in log + + def test_schedule_resources_hint(self): + """It is tricky to test resources hint in a provider agnostic way. The + best way seems to ask for at least 1GB of RAM and 10GB + disk. Some providers do not offer a 1GB RAM flavor (OVH for + instance) and the 2GB RAM will be chosen instead. It however + seems unlikely that a 4GB RAM will be chosen because it would + mean such a provider has nothing under that limit and it's a + little too high. + + Since the default when installing is to ask for 7000 MB, we + can reasonably assume that the hint has been taken into + account if the instance has less than 4GB RAM. + """ + try: + teuthology.misc.sh("openstack volume list") + job = 'teuthology/openstack/test/resources_hint.yaml' + has_cinder = True + except subprocess.CalledProcessError: + job = 'teuthology/openstack/test/resources_hint_no_cinder.yaml' + has_cinder = False + args = ['--name', 'fake', + '--verbose', + '--owner', 'test@test.com', + '--worker', 'openstack', + job] + scripts.schedule.main(args) + self.wait_worker() + log = self.get_teuthology_log() + assert "teuthology.run:pass" in log + assert "RAM size ok" in log + if has_cinder: + assert "Disk size ok" in log + +class TestLock(Integration): + + def setup(self): + self.options = ['--verbose', + '--machine-type', 'openstack' ] + + def test_main(self): + args = scripts.lock.parse_args(self.options + ['--lock']) + assert teuthology.lock.main(args) == 0 + + def test_lock_unlock(self): + for image in teuthology.openstack.OpenStack.image2url.keys(): + (os_type, os_version) = image.split('-') + args = scripts.lock.parse_args(self.options + + ['--lock-many', '1', + '--os-type', os_type, + '--os-version', os_version]) + assert teuthology.lock.main(args) == 0 + locks = teuthology.lock.list_locks(locked=True) + assert len(locks) == 1 + args = scripts.lock.parse_args(self.options + + ['--unlock', locks[0]['name']]) + assert teuthology.lock.main(args) == 0 + + def test_list(self, capsys): + args = scripts.lock.parse_args(self.options + ['--list', '--all']) + teuthology.lock.main(args) + out, err = capsys.readouterr() + assert 'machine_type' in out + assert 'openstack' in out + +class TestNuke(Integration): + + def setup(self): + self.options = ['--verbose', + '--machine-type', 'openstack'] + + def test_nuke(self): + image = teuthology.openstack.OpenStack.image2url.keys()[0] + + (os_type, os_version) = image.split('-') + args = scripts.lock.parse_args(self.options + + ['--lock-many', '1', + '--os-type', os_type, + '--os-version', os_version]) + assert teuthology.lock.main(args) == 0 + locks = teuthology.lock.list_locks(locked=True) + 
logging.info('list_locks = ' + str(locks)) + assert len(locks) == 1 + ctx = argparse.Namespace(name=None, + config={ + 'targets': { locks[0]['name']: None }, + }, + owner=locks[0]['locked_by'], + teuthology_config={}) + teuthology.nuke.nuke(ctx, should_unlock=True) + locks = teuthology.lock.list_locks(locked=True) + assert len(locks) == 0 diff --git a/teuthology/openstack/test/openstack.yaml b/teuthology/openstack/test/openstack.yaml new file mode 100644 index 0000000000..6ae6d877cc --- /dev/null +++ b/teuthology/openstack/test/openstack.yaml @@ -0,0 +1,13 @@ +overrides: + ceph: + conf: + global: + osd heartbeat grace: 100 + # this line to address issue #1017 + mon lease: 15 + mon lease ack timeout: 25 + rgw: + default_idle_timeout: 1200 + s3tests: + idle_timeout: 1200 +archive-on-error: true diff --git a/teuthology/openstack/test/resources_hint.yaml b/teuthology/openstack/test/resources_hint.yaml new file mode 100644 index 0000000000..cb13ec48e7 --- /dev/null +++ b/teuthology/openstack/test/resources_hint.yaml @@ -0,0 +1,25 @@ +stop_worker: true +machine_type: openstack +openstack: + machine: + disk: 10 # GB + ram: 1024 # MB + cpus: 1 + volumes: + count: 1 + size: 2 # GB +os_type: ubuntu +os_version: "14.04" +roles: +- - mon.a + - osd.0 +tasks: +- exec: + mon.a: + - test $(sed -n -e 's/MemTotal.* \([0-9][0-9]*\).*/\1/p' < /proc/meminfo) -lt 4000000 && echo "RAM" "size" "ok" + - cat /proc/meminfo +# wait for the attached volume to show up + - for delay in 1 2 4 8 16 32 64 128 256 512 ; do if test -e /sys/block/vdb/size ; then break ; else sleep $delay ; fi ; done +# 4000000 because 512 bytes sectors + - test $(cat /sys/block/vdb/size) -gt 4000000 && echo "Disk" "size" "ok" + - cat /sys/block/vdb/size diff --git a/teuthology/openstack/test/resources_hint_no_cinder.yaml b/teuthology/openstack/test/resources_hint_no_cinder.yaml new file mode 100644 index 0000000000..5ed2797a7e --- /dev/null +++ b/teuthology/openstack/test/resources_hint_no_cinder.yaml @@ -0,0 +1,20 @@ +stop_worker: true +machine_type: openstack +openstack: + machine: + disk: 10 # GB + ram: 1024 # MB + cpus: 1 + volumes: + count: 0 + size: 2 # GB +os_type: ubuntu +os_version: "14.04" +roles: +- - mon.a + - osd.0 +tasks: +- exec: + mon.a: + - cat /proc/meminfo + - test $(sed -n -e 's/MemTotal.* \([0-9][0-9]*\).*/\1/p' < /proc/meminfo) -lt 4000000 && echo "RAM" "size" "ok" diff --git a/teuthology/openstack/test/stop_worker.yaml b/teuthology/openstack/test/stop_worker.yaml new file mode 100644 index 0000000000..45133bb00a --- /dev/null +++ b/teuthology/openstack/test/stop_worker.yaml @@ -0,0 +1 @@ +stop_worker: true diff --git a/teuthology/openstack/test/suites/noop/+ b/teuthology/openstack/test/suites/noop/+ new file mode 100644 index 0000000000..e69de29bb2 diff --git a/teuthology/openstack/test/suites/noop/noop.yaml b/teuthology/openstack/test/suites/noop/noop.yaml new file mode 100644 index 0000000000..49497c2282 --- /dev/null +++ b/teuthology/openstack/test/suites/noop/noop.yaml @@ -0,0 +1,9 @@ +stop_worker: true +roles: +- - mon.a + - osd.0 +tasks: +- exec: + mon.a: + - echo "Well done !" 
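The ``suites/noop`` directory above is a complete single-job suite; ``test_suite_noop`` earlier schedules it through ``scripts.suite`` rather than the ``teuthology-openstack`` wrapper. A minimal sketch of the same call, assuming a configured worker and paddles instance (the upload destination is illustrative and optional)::

    import os
    import scripts.suite

    cwd = os.getcwd()   # a teuthology checkout
    args = ['--suite', 'noop',
            '--suite-dir', cwd + '/teuthology/openstack/test',
            '--machine-type', 'openstack',
            '--archive-upload', 'user@example.org:/tmp/archives',
            '--verbose']
    scripts.suite.main(args)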
+ diff --git a/teuthology/openstack/test/suites/nuke/+ b/teuthology/openstack/test/suites/nuke/+ new file mode 100644 index 0000000000..e69de29bb2 diff --git a/teuthology/openstack/test/suites/nuke/nuke.yaml b/teuthology/openstack/test/suites/nuke/nuke.yaml new file mode 100644 index 0000000000..9ffd7ac5c9 --- /dev/null +++ b/teuthology/openstack/test/suites/nuke/nuke.yaml @@ -0,0 +1,8 @@ +stop_worker: true +nuke-on-error: true +roles: +- - client.0 +tasks: +- exec: + client.0: + - exit 1 diff --git a/teuthology/openstack/test/test_config.py b/teuthology/openstack/test/test_config.py new file mode 100644 index 0000000000..5fddeedf06 --- /dev/null +++ b/teuthology/openstack/test/test_config.py @@ -0,0 +1,35 @@ +from teuthology.config import config + + +class TestOpenStack(object): + + def setup(self): + self.openstack_config = config['openstack'] + + def test_config_clone(self): + assert 'clone' in self.openstack_config + + def test_config_user_data(self): + os_type = 'rhel' + os_version = '7.0' + template_path = self.openstack_config['user-data'].format( + os_type=os_type, + os_version=os_version) + assert os_type in template_path + assert os_version in template_path + + def test_config_ip(self): + assert 'ip' in self.openstack_config + + def test_config_machine(self): + assert 'machine' in self.openstack_config + machine_config = self.openstack_config['machine'] + assert 'disk' in machine_config + assert 'ram' in machine_config + assert 'cpus' in machine_config + + def test_config_volumes(self): + assert 'volumes' in self.openstack_config + volumes_config = self.openstack_config['volumes'] + assert 'count' in volumes_config + assert 'size' in volumes_config diff --git a/teuthology/openstack/test/test_openstack.py b/teuthology/openstack/test/test_openstack.py new file mode 100644 index 0000000000..c9723b548b --- /dev/null +++ b/teuthology/openstack/test/test_openstack.py @@ -0,0 +1,132 @@ +# +# Copyright (c) 2015 Red Hat, Inc. +# +# Author: Loic Dachary +# +# Permission is hereby granted, free of charge, to any person obtaining a copy +# of this software and associated documentation files (the "Software"), to deal +# in the Software without restriction, including without limitation the rights +# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +# copies of the Software, and to permit persons to whom the Software is +# furnished to do so, subject to the following conditions: +# +# The above copyright notice and this permission notice shall be included in +# all copies or substantial portions of the Software. +# +# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN +# THE SOFTWARE. 
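The ``TestLock`` integration tests earlier in this patch lock and unlock OpenStack instances through the regular teuthology entry points. A minimal sketch of doing the same by hand, assuming working OpenStack credentials, the paddles setup from ``setup-openstack.sh``, and nothing else currently locked::

    import scripts.lock
    import teuthology.lock

    options = ['--verbose', '--machine-type', 'openstack']

    # Provision and lock one ubuntu 14.04 instance ...
    args = scripts.lock.parse_args(options + ['--lock-many', '1',
                                              '--os-type', 'ubuntu',
                                              '--os-version', '14.04'])
    assert teuthology.lock.main(args) == 0

    # ... then release it again (assumes ours is the only locked node).
    locks = teuthology.lock.list_locks(locked=True)
    args = scripts.lock.parse_args(options + ['--unlock', locks[0]['name']])
    assert teuthology.lock.main(args) == 0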
+# +import argparse +import logging +import os +import pytest +import tempfile + +import teuthology +from teuthology import misc +from teuthology.openstack import TeuthologyOpenStack +import scripts.openstack + +class TestTeuthologyOpenStack(object): + + @classmethod + def setup_class(self): + if 'OS_AUTH_URL' not in os.environ: + pytest.skip('no OS_AUTH_URL environment variable') + + teuthology.log.setLevel(logging.DEBUG) + teuthology.misc.read_config(argparse.Namespace()) + + ip = TeuthologyOpenStack.create_floating_ip() + if ip: + ip_id = TeuthologyOpenStack.get_floating_ip_id(ip) + misc.sh("openstack ip floating delete " + ip_id) + self.can_create_floating_ips = True + else: + self.can_create_floating_ips = False + + def setup(self): + self.key_filename = tempfile.mktemp() + self.key_name = 'teuthology-test' + self.name = 'teuthology-test' + self.clobber() + misc.sh(""" +openstack keypair create {key_name} > {key_filename} +chmod 600 {key_filename} + """.format(key_filename=self.key_filename, + key_name=self.key_name)) + self.options = ['--key-name', self.key_name, + '--key-filename', self.key_filename, + '--name', self.name, + '--verbose'] + + def teardown(self): + self.clobber() + os.unlink(self.key_filename) + + def clobber(self): + misc.sh(""" +openstack server delete {name} --wait || true +openstack keypair delete {key_name} || true + """.format(key_name=self.key_name, + name=self.name)) + + def test_create(self, capsys): + teuthology_argv = [ + '--suite', 'upgrade/hammer', + '--dry-run', + '--ceph', 'master', + '--kernel', 'distro', + '--flavor', 'gcov', + '--distro', 'ubuntu', + '--suite-branch', 'hammer', + '--email', 'loic@dachary.org', + '--num', '10', + '--limit', '23', + '--subset', '1/2', + '--priority', '101', + '--timeout', '234', + '--filter', 'trasher', + '--filter-out', 'erasure-code', + ] + argv = (self.options + + ['--upload', + '--archive-upload', 'user@archive:/tmp'] + + teuthology_argv) + args = scripts.openstack.parse_args(argv) + teuthology = TeuthologyOpenStack(args, None, argv) + teuthology.user_data = 'teuthology/openstack/test/user-data-test1.txt' + teuthology.teuthology_suite = 'echo --' + + teuthology.main() + assert 'Ubuntu 14.04' in teuthology.ssh("lsb_release -a") + variables = teuthology.ssh("grep 'substituded variables' /var/log/cloud-init.log") + assert "nworkers=" + str(args.simultaneous_jobs) in variables + assert "username=" + teuthology.username in variables + assert "upload=--archive-upload user@archive:/tmp" in variables + assert "upload=git clone" in variables + assert os.environ['OS_AUTH_URL'] in variables + + out, err = capsys.readouterr() + assert " ".join(teuthology_argv) in out + + if self.can_create_floating_ips: + ip = teuthology.get_floating_ip(self.name) + teuthology.teardown() + if self.can_create_floating_ips: + assert teuthology.get_floating_ip_id(ip) == None + + def test_floating_ip(self): + if not self.can_create_floating_ips: + pytest.skip('unable to create floating ips') + + expected = TeuthologyOpenStack.create_floating_ip() + ip = TeuthologyOpenStack.get_unassociated_floating_ip() + assert expected == ip + ip_id = TeuthologyOpenStack.get_floating_ip_id(ip) + misc.sh("openstack ip floating delete " + ip_id) diff --git a/teuthology/openstack/test/user-data-test1.txt b/teuthology/openstack/test/user-data-test1.txt new file mode 100644 index 0000000000..9889aa9f35 --- /dev/null +++ b/teuthology/openstack/test/user-data-test1.txt @@ -0,0 +1,5 @@ +#cloud-config +system_info: + default_user: + name: ubuntu +final_message: 
"teuthology is up and running after $UPTIME seconds, substituded variables nworkers=NWORKERS openrc=OPENRC username=TEUTHOLOGY_USERNAME upload=UPLOAD clone=CLONE_OPENSTACK" diff --git a/teuthology/orchestra/connection.py b/teuthology/orchestra/connection.py index b0c631fee6..cc5f1d8cc3 100644 --- a/teuthology/orchestra/connection.py +++ b/teuthology/orchestra/connection.py @@ -38,7 +38,7 @@ def create_key(keytype, key): def connect(user_at_host, host_key=None, keep_alive=False, timeout=60, - _SSHClient=None, _create_key=None): + _SSHClient=None, _create_key=None, retry=True, key_filename=None): """ ssh connection routine. @@ -48,6 +48,9 @@ def connect(user_at_host, host_key=None, keep_alive=False, timeout=60, :param timeout: timeout in seconds :param _SSHClient: client, default is paramiko ssh client :param _create_key: routine to create a key (defaults to local reate_key) + :param retry: Whether or not to retry failed connection attempts + (eventually giving up if none succeed). Default is True + :param key_filename: Optionally override which private key to use. :return: ssh connection. """ user, host = split_user(user_at_host) @@ -76,6 +79,8 @@ def connect(user_at_host, host_key=None, keep_alive=False, timeout=60, username=user, timeout=timeout ) + if key_filename: + connect_args['key_filename'] = key_filename ssh_config_path = os.path.expanduser("~/.ssh/config") if os.path.exists(ssh_config_path): @@ -83,10 +88,11 @@ def connect(user_at_host, host_key=None, keep_alive=False, timeout=60, ssh_config.parse(open(ssh_config_path)) opts = ssh_config.lookup(host) opts_to_args = { - 'identityfile': 'key_filename', 'host': 'hostname', 'user': 'username' } + if not key_filename: + opts_to_args['identityfile'] = 'key_filename' for opt_name, arg_name in opts_to_args.items(): if opt_name in opts: value = opts[opt_name] @@ -96,13 +102,17 @@ def connect(user_at_host, host_key=None, keep_alive=False, timeout=60, log.info(connect_args) - # just let the exceptions bubble up to caller - with safe_while(sleep=1, action='connect to ' + host) as proceed: - while proceed(): - try: - ssh.connect(**connect_args) - break - except paramiko.AuthenticationException: - log.exception("Error connecting to {host}".format(host=host)) + if not retry: + ssh.connect(**connect_args) + else: + # Retries are implemented using safe_while + with safe_while(sleep=1, action='connect to ' + host) as proceed: + while proceed(): + try: + ssh.connect(**connect_args) + break + except paramiko.AuthenticationException: + log.exception( + "Error connecting to {host}".format(host=host)) ssh.get_transport().set_keepalive(keep_alive) return ssh diff --git a/teuthology/provision.py b/teuthology/provision.py index 516ae03a26..078794375c 100644 --- a/teuthology/provision.py +++ b/teuthology/provision.py @@ -1,9 +1,14 @@ +import json import logging +import misc import os +import random +import re import subprocess import tempfile import yaml +from .openstack import OpenStack from .config import config from .contextutil import safe_while from .misc import decanonicalize_hostname, get_distro, get_distro_version @@ -195,6 +200,170 @@ def __del__(self): self.remove_config() +class ProvisionOpenStack(OpenStack): + """ + A class that provides methods for creating and destroying virtual machine + instances using OpenStack + """ + def __init__(self): + super(ProvisionOpenStack, self).__init__() + self.user_data = tempfile.mktemp() + log.debug("ProvisionOpenStack: " + str(config.openstack)) + self.basename = 'target' + self.up_string = 'The system is 
finally up' + self.property = "%16x" % random.getrandbits(128) + + def __del__(self): + if os.path.exists(self.user_data): + os.unlink(self.user_data) + + def init_user_data(self, os_type, os_version): + """ + Get the user-data file that is fit for os_type and os_version. + It is responsible for setting up enough for ansible to take + over. + """ + template_path = config['openstack']['user-data'].format( + os_type=os_type, + os_version=os_version) + nameserver = config['openstack'].get('nameserver', '8.8.8.8') + user_data_template = open(template_path).read() + user_data = user_data_template.format( + up=self.up_string, + nameserver=nameserver, + username=self.username, + lab_domain=config.lab_domain) + open(self.user_data, 'w').write(user_data) + + def attach_volumes(self, name, hint): + """ + Create and attach volumes to the named OpenStack instance. + """ + if hint: + volumes = hint['volumes'] + else: + volumes = config['openstack']['volumes'] + for i in range(volumes['count']): + volume_name = name + '-' + str(i) + try: + misc.sh("openstack volume show -f json " + + volume_name) + except subprocess.CalledProcessError as e: + if 'No volume with a name or ID' not in e.output: + raise e + misc.sh("openstack volume create -f json " + + config['openstack'].get('volume-create', '') + " " + + " --size " + str(volumes['size']) + " " + + volume_name) + with safe_while(sleep=2, tries=100, + action="volume " + volume_name) as proceed: + while proceed(): + r = misc.sh("openstack volume show -f json " + + volume_name) + status = self.get_value(json.loads(r), 'status') + if status == 'available': + break + else: + log.info("volume " + volume_name + + " not available yet") + misc.sh("openstack server add volume " + + name + " " + volume_name) + + def list_volumes(self, name_or_id): + """ + Return the uuid of the volumes attached to the name_or_id + OpenStack instance. + """ + instance = misc.sh("openstack server show -f json " + + name_or_id) + volumes = self.get_value(json.loads(instance), + 'os-extended-volumes:volumes_attached') + return [ volume['id'] for volume in volumes ] + + @staticmethod + def ip2name(prefix, ip): + """ + return the instance name suffixed with the /16 part of the IP. + """ + digits = map(int, re.findall('.*\.(\d+)\.(\d+)', ip)[0]) + return prefix + "%03d%03d" % tuple(digits) + + def create(self, num, os_type, os_version, arch, resources_hint): + """ + Create num OpenStack instances running os_type os_version and + return their names. Each instance has at least the resources + described in resources_hint. 
+ """ + log.debug('ProvisionOpenStack:create') + self.init_user_data(os_type, os_version) + image = self.image(os_type, os_version) + if 'network' in config['openstack']: + net = "--nic net-id=" + str(self.net_id(config['openstack']['network'])) + else: + net = '' + if resources_hint: + flavor_hint = resources_hint['machine'] + else: + flavor_hint = config['openstack']['machine'] + flavor = self.flavor(flavor_hint, + config['openstack'].get('flavor-select-regexp')) + misc.sh("openstack server create" + + " " + config['openstack'].get('server-create', '') + + " -f json " + + " --image '" + str(image) + "'" + + " --flavor '" + str(flavor) + "'" + + " --key-name teuthology " + + " --user-data " + str(self.user_data) + + " " + net + + " --min " + str(num) + + " --max " + str(num) + + " --security-group teuthology" + + " --property teuthology=" + self.property + + " --property ownedby=" + config.openstack['ip'] + + " --wait " + + " " + self.basename) + all_instances = json.loads(misc.sh("openstack server list -f json --long")) + instances = filter( + lambda instance: self.property in instance['Properties'], + all_instances) + fqdns = [] + try: + network = config['openstack'].get('network', '') + for instance in instances: + name = self.ip2name(self.basename, self.get_ip(instance['ID'], network)) + misc.sh("openstack server set " + + "--name " + name + " " + + instance['ID']) + fqdn = name + '.' + config.lab_domain + if not misc.ssh_keyscan_wait(fqdn): + raise ValueError('ssh_keyscan_wait failed for ' + fqdn) + import time + time.sleep(15) + if not self.cloud_init_wait(fqdn): + raise ValueError('clound_init_wait failed for ' + fqdn) + self.attach_volumes(name, resources_hint) + fqdns.append(fqdn) + except Exception as e: + log.exception(str(e)) + for id in [instance['ID'] for instance in instances]: + self.destroy(id) + raise e + return fqdns + + def destroy(self, name_or_id): + """ + Delete the name_or_id OpenStack instance. 
+ """ + log.debug('ProvisionOpenStack:destroy ' + name_or_id) + if not self.exists(name_or_id): + return True + volumes = self.list_volumes(name_or_id) + misc.sh("openstack server delete --wait " + name_or_id) + for volume in volumes: + misc.sh("openstack volume delete " + volume) + return True + + def create_if_vm(ctx, machine_name, _downburst=None): """ Use downburst to create a virtual machine @@ -209,6 +378,9 @@ def create_if_vm(ctx, machine_name, _downburst=None): return False os_type = get_distro(ctx) os_version = get_distro_version(ctx) + if status_info.get('machine_type') == 'openstack': + return ProvisionOpenStack(name=machine_name).create( + os_type, os_version) has_config = hasattr(ctx, 'config') and ctx.config is not None if has_config and 'downburst' in ctx.config: @@ -249,6 +421,10 @@ def destroy_if_vm(ctx, machine_name, user=None, description=None, log.error(msg.format(node=machine_name, desc_arg=description, desc_lock=status_info['description'])) return False + if status_info.get('machine_type') == 'openstack': + return ProvisionOpenStack().destroy( + decanonicalize_hostname(machine_name)) + dbrst = _downburst or Downburst(name=machine_name, os_type=None, os_version=None, status=status_info) return dbrst.destroy() diff --git a/teuthology/run.py b/teuthology/run.py index 48a69593e1..430ed80d97 100644 --- a/teuthology/run.py +++ b/teuthology/run.py @@ -204,6 +204,7 @@ def get_initial_tasks(lock, config, machine_type): {'internal.base': None}, {'internal.archive': None}, {'internal.coredump': None}, + {'internal.archive_upload': None}, {'internal.sudo': None}, {'internal.syslog': None}, {'internal.timer': None}, diff --git a/teuthology/suite.py b/teuthology/suite.py index bf2df8a441..80d1dfc4e5 100644 --- a/teuthology/suite.py +++ b/teuthology/suite.py @@ -58,6 +58,9 @@ def main(args): email = args['--email'] if email: config.results_email = email + if args['--archive-upload']: + config.archive_upload = args['--archive-upload'] + log.info('Will upload archives to ' + args['--archive-upload']) timeout = args['--timeout'] filter_in = args['--filter'] filter_out = args['--filter-out'] @@ -244,6 +247,8 @@ def create_initial_config(suite, suite_branch, ceph_branch, teuthology_branch, teuthology_branch=teuthology_branch, machine_type=machine_type, distro=distro, + archive_upload=config.archive_upload, + archive_upload_key=config.archive_upload_key, ) conf_dict = substitute_placeholders(dict_templ, config_input) conf_dict.update(kernel_dict) @@ -1009,6 +1014,8 @@ def _substitute(input_dict, values_dict): 'branch': Placeholder('ceph_branch'), 'sha1': Placeholder('ceph_hash'), 'teuthology_branch': Placeholder('teuthology_branch'), + 'archive_upload': Placeholder('archive_upload'), + 'archive_upload_key': Placeholder('archive_upload_key'), 'machine_type': Placeholder('machine_type'), 'nuke-on-error': True, 'os_type': Placeholder('distro'), diff --git a/teuthology/task/internal.py b/teuthology/task/internal.py index ef7023e409..567621eea6 100644 --- a/teuthology/task/internal.py +++ b/teuthology/task/internal.py @@ -566,6 +566,31 @@ def coredump(ctx, config): 'Found coredumps on {rem}'.format(rem=rem) +@contextlib.contextmanager +def archive_upload(ctx, config): + """ + Upload the archive directory to a designated location + """ + try: + yield + finally: + upload = ctx.config.get('archive_upload') + archive_path = ctx.config.get('archive_path') + if upload and archive_path: + log.info('Uploading archives ...') + upload_key = ctx.config.get('archive_upload_key') + if upload_key: + ssh = 
"RSYNC_RSH='ssh -i " + upload_key + "'" + else: + ssh = '' + split_path = archive_path.split('/') + split_path.insert(-2, '.') + misc.sh(ssh + " rsync -avz --relative /" + + os.path.join(*split_path) + " " + + upload) + else: + log.info('Not uploading archives.') + @contextlib.contextmanager def syslog(ctx, config): """ diff --git a/teuthology/task/selinux.py b/teuthology/task/selinux.py index 581d398949..5b1f66bdda 100644 --- a/teuthology/task/selinux.py +++ b/teuthology/task/selinux.py @@ -6,6 +6,7 @@ from teuthology.exceptions import SELinuxError from teuthology.misc import get_archive_dir from teuthology.orchestra.cluster import Cluster +from teuthology.lockstatus import get_status from . import Task @@ -33,8 +34,9 @@ def filter_hosts(self): super(SELinux, self).filter_hosts() new_cluster = Cluster() for (remote, roles) in self.cluster.remotes.iteritems(): - if remote.shortname.startswith('vpm'): - msg = "Excluding {host}: downburst VMs are not yet supported" + status_info = get_status(remote.name) + if status_info and status_info.get('is_vm', False): + msg = "Excluding {host}: VMs are not yet supported" log.info(msg.format(host=remote.shortname)) elif remote.os.package_type == 'rpm': new_cluster.add(remote, roles) diff --git a/teuthology/test/task/test_selinux.py b/teuthology/test/task/test_selinux.py index 57748c56f6..9145f31fbd 100644 --- a/teuthology/test/task/test_selinux.py +++ b/teuthology/test/task/test_selinux.py @@ -11,7 +11,9 @@ def setup(self): self.ctx = FakeNamespace() self.ctx.config = dict() - def test_host_exclusion(self): + @patch('teuthology.task.selinux.get_status') + def test_host_exclusion(self, mock_get_status): + mock_get_status.return_value = None with patch.multiple( Remote, os=DEFAULT, diff --git a/teuthology/test/test_suite.py b/teuthology/test/test_suite.py index 1ef535bcd5..0b25b598bd 100644 --- a/teuthology/test/test_suite.py +++ b/teuthology/test/test_suite.py @@ -41,6 +41,8 @@ def test_substitute_placeholders(self): teuthology_branch='teuthology_branch', machine_type='machine_type', distro='distro', + archive_upload='archive_upload', + archive_upload_key='archive_upload_key', ) output_dict = suite.substitute_placeholders(suite.dict_templ, input_dict) @@ -58,6 +60,8 @@ def test_null_placeholders_dropped(self): ceph_hash='ceph_hash', teuthology_branch='teuthology_branch', machine_type='machine_type', + archive_upload='archive_upload', + archive_upload_key='archive_upload_key', distro=None, ) output_dict = suite.substitute_placeholders(suite.dict_templ, diff --git a/tox.ini b/tox.ini index 4f0ef81f6f..0d1d89ccf4 100644 --- a/tox.ini +++ b/tox.ini @@ -1,5 +1,5 @@ [tox] -envlist = docs, py27, py27-integration, flake8 +envlist = docs, py27, py27-integration, flake8, openstack [testenv:py27] install_command = pip install --upgrade {opts} {packages} @@ -13,11 +13,12 @@ deps= pytest-cov==1.6 coverage==3.7.1 -commands=py.test --cov=teuthology --cov-report=term -v {posargs:teuthology scripts} +commands= + py.test --cov=teuthology --cov-report=term -v {posargs:teuthology scripts} [testenv:py27-integration] install_command = pip install --upgrade {opts} {packages} -passenv = HOME +passenv = HOME OS_REGION_NAME OS_AUTH_URL OS_TENANT_ID OS_TENANT_NAME OS_PASSWORD OS_USERNAME sitepackages=True deps= -r{toxinidir}/requirements.txt @@ -44,3 +45,25 @@ deps=sphinx commands= sphinx-apidoc -f -o . ../teuthology ../teuthology/test ../teuthology/orchestra/test ../teuthology/task/test sphinx-build -b html -d {envtmpdir}/doctrees . 
{envtmpdir}/html + +[testenv:openstack] +install_command = pip install --upgrade {opts} {packages} +passenv = HOME OS_REGION_NAME OS_AUTH_URL OS_TENANT_ID OS_TENANT_NAME OS_PASSWORD OS_USERNAME +sitepackages=True +deps= + -r{toxinidir}/requirements.txt + mock + +commands=py.test -v {posargs:teuthology/openstack/test/test_openstack.py} +basepython=python2.7 + +[testenv:openstack-integration] +passenv = HOME OS_REGION_NAME OS_AUTH_URL OS_TENANT_ID OS_TENANT_NAME OS_PASSWORD OS_USERNAME +basepython=python2 +sitepackages=True +deps= + -r{toxinidir}/requirements.txt + mock + +commands= + py.test -v teuthology/openstack/test/openstack-integration.py
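The ``internal.archive_upload`` task added earlier in this patch relies on ``rsync --relative`` together with a ``.`` inserted before the run directory, so that only the ``<run>/<job>`` part of the archive path is recreated at the upload destination. The path manipulation is easiest to see in isolation; a small sketch with an illustrative archive path::

    import os

    def relative_upload_path(archive_path):
        # Insert '.' before the run directory: rsync --relative then copies
        # everything to the right of the '.' relative to the destination.
        split_path = archive_path.split('/')
        split_path.insert(-2, '.')
        return '/' + os.path.join(*split_path)

    print(relative_upload_path('/home/ubuntu/archive/myrun-2015/1'))
    # -> /home/ubuntu/archive/./myrun-2015/1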