Commits on Jun 30, 2016
  1. @cboylan

    Do not use async with zl restarter playbook

    Running this playbook on the puppetmaster we consistently run into ssh
    failures due to async reconnecting periodically and network issues
    between hosts. We can address this by starting a single connection
    without async and polling on that, which appears to be the default
    wait_for behavior. Testing of this seems to indicate it is more reliable.
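    A minimal sketch of the pattern described above, assuming a hypothetical
    host group and port (the real playbook's task names and values differ):

```yaml
# Sketch only: poll for the restarted service over a single plain
# connection rather than wrapping the check in async/poll.
- hosts: zuul-launchers        # assumed group name, for illustration
  tasks:
    - name: Wait for the service to come back after restart
      wait_for:
        port: 4730             # assumed port, for illustration
        timeout: 600
```

    wait_for polls from the target host itself, so one SSH connection is
    held open instead of async periodically reconnecting.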
    Change-Id: Iec72e2c0d099c0e28bc4b4b48608a03b3e66b4c0
    cboylan committed Jun 30, 2016
Commits on Jun 24, 2016
  1. @cboylan

    Don't retry console log downloads

    With the switch to zuul launcher instead of jenkins we now get the zmq
    finished event after all logs are done copying (unlike with jenkins
    where the console log could show up later). As a result we don't need to
    keep trying to download console logs until some EOF file string
    shows up; instead we can download what is there and know it is complete.
    Change-Id: I789c073a2fab8863de833684bc64b3e5cb405cf8
    cboylan committed Jun 24, 2016
Commits on May 25, 2016
  1. @cboylan

    Test logstash.o.o on trusty.

    This is in prep for switching over to trusty for the logstash proxy.
    Change-Id: I0360e1a5bbe7524697c7b9b62ec1292b3f944bad
    cboylan committed May 25, 2016
Commits on May 12, 2016
  1. @cboylan

    Add BHS1 region to openstackjenkins-ovh all-clouds

    This region is missing from the region list for the openstackjenkins-ovh
    cloud definition in all-clouds.yaml. Add it in, as we use that region.
    Change-Id: Ie292b2ee50dc57da2a9e8f6997368b116c045326
    cboylan committed May 12, 2016
Commits on May 4, 2016
  1. @cboylan

    Upgrade elasticsearch to 1.7.5

    There have been a couple of bug fix releases between the 1.7.3 we are
    running and the latest 1.7.x. Pull in these fixes in hopes that it makes shard
    allocation more reliable.
    Change-Id: I48f46663c73cb178ca3cf95a166b3df2ea121459
    cboylan committed May 4, 2016
Commits on May 2, 2016
  1. @cboylan

    Properly handle volumes in launch node

    We can only get the volume attach device if we are attaching a volume.
    Check if the volume is being attached and only determine the attachment
    location in that case to avoid errors.
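    The guard can be sketched as follows (function and key names here are
    hypothetical, not the script's actual API):

```python
# Hypothetical sketch: only look up the attach device when a volume
# was actually requested, to avoid errors on volume-less launches.
def get_attach_device(volume):
    """Return the device path for an attached volume, else None."""
    if volume is None:
        return None
    return volume.get('device')  # e.g. '/dev/vdb'

print(get_attach_device(None))                    # no volume: no lookup
print(get_attach_device({'device': '/dev/vdb'}))
```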
    Story: 2000569
    Change-Id: I4adc5e23abdfc0627a0850f845e2333d3bd25e63
    cboylan committed May 2, 2016
Commits on Apr 29, 2016
  1. @cboylan

    Add xenial to debian ceph hammer mirror

    There are xenial packages in the debian ceph hammer repo now. Add them
    to our mirror so that we can transition to xenial with ceph packages.
    Change-Id: I815f903e11bad92da2c3587be02fed703cfa2361
    cboylan committed Apr 29, 2016
Commits on Apr 26, 2016
  1. @cboylan @jesusaurus

    Use logstash filters from filters repo

    This uses new logstash conf.d features to properly link in the
    configuration from the logstash-filters repo. This should make
    configuring logstash far more flexible and reusable.
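    One plausible conf.d layout, with hypothetical file names (the actual
    repo layout may differ):

```
/etc/logstash/conf.d/
├── 00-input.conf        # local input config
├── 50-filters.conf      # linked in from the logstash-filters repo
└── 99-output.conf       # local output config
```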
    Change-Id: Ia304eb7e73c82ca5ce85967fbf442bb4bc5f8b7a
    Depends-On: Icbca7a6ba0c5a94a273ef158f707311b588483fd
    cboylan committed with jesusaurus Apr 25, 2016
Commits on Apr 19, 2016
  1. @cboylan

    Add support to launch-node for cinder attach

    Now that we have a shade version of the launch node script adding in
    support for attaching a cinder volume is simple. Do this so that
    launching mirrors which rely on cinder volumes is simpler.
    This updates the script to set up the first cinder volume
    with lvm and mount it under the specified path. It will also install
    lvm2 packages since they may not be present on all base images.
    This updates the script to avoid blindly using /dev/vdb as
    the location for swap as this may be a cinder volume or config drive.
    We add availability zone, device specification, mount path, and
    fs label support, as these are all necessary
    inputs to properly mount a cinder volume in a VM.
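    The volume setup roughly corresponds to steps like these (device path,
    volume group name, label, and mount point are all assumptions for
    illustration; the real script derives them from its arguments):

```shell
pvcreate /dev/vdb                      # assumed cinder attach device
vgcreate main /dev/vdb
lvcreate -l 100%FREE -n mirror main
mkfs.ext4 -L mirror /dev/main/mirror   # fs label from the CLI
mount LABEL=mirror /srv/mirror         # assumed mount path
```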
    Change-Id: Ie95fd4bd5fca8df4f8046d43d1333935cad567e3
    cboylan committed Feb 26, 2016
Commits on Apr 15, 2016
  1. @cboylan

    Restrict access to Jenkinses

    Recent security issues with Jenkins have prompted us to remove them from
    the Internet. Sorry folks.
    Change-Id: I99bf3cfbcd92f65376638e00683574252e8bda02
    cboylan committed Apr 15, 2016
  2. @cboylan

    Don't mirror udeb on ceph debian mirror

    The ceph debian mirror does not have udeb components so don't attempt to
    mirror them. Attempting to mirror them makes reprepro fail.
    Change-Id: Ica8e99092d762445af78bb0a7f7f789c8576a1c5
    cboylan committed Apr 15, 2016
Commits on Apr 14, 2016
  1. @cboylan

    Set the debian ceph key id to proper value

    Turns out we got the wrong hash before :( but thankfully reprepro didn't
    function in that state. Get the ids correct so reprepro can do its job.
    Change-Id: Ic8e5f3ebfea681b289e1f5381134df4e0485af3d
    cboylan committed Apr 14, 2016
  2. @cboylan

    Change afs path for ceph mirror for readability

    Move the ceph debian hammer mirror to mirror/ceph-deb-hammer from
    mirror/debian-ceph-hammer to make it clear this is a ceph mirror for
    debuntu distros. This isn't a debian specific ceph repo. Also don't use
    a ceph/ directory in order to keep all top level repos at the same
    directory level.
    Change-Id: I5d313d301db4eaeb4267cdd6ce7787cf9c098582
    cboylan committed Apr 14, 2016
  3. @cboylan

    Apt mirror for ceph hammer release

    Mirror the trusty packages for the ceph hammer release to aid in the
    process of making ceph testing more robust. Use reprepro, which is already in use
    to mirror the main ubuntu trusty repos.
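    A reprepro conf/distributions entry for such a mirror might look
    roughly like this (suite, components, and architectures here are
    assumptions, not the exact values used):

```
Origin: Ceph
Label: ceph-hammer
Suite: trusty
Codename: trusty
Architectures: amd64 i386
Components: main
Update: ceph-hammer
```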
    Change-Id: Ifd09272c7b1e07de9135be5a96be06153a3f611e
    cboylan committed Mar 15, 2016
Commits on Mar 16, 2016
  1. @cboylan

    Make reprepro usable on multiple mirrors

    Ubuntu isn't the only thing we want to mirror; we may also want to
    mirror debian or ceph packages and so on. Refactor the puppet so that it
    is easier to do this in a reusable way.
    Change-Id: I0a12bc4cb67339a7566fb113bbbc897d4f112f50
    cboylan committed Mar 15, 2016
Commits on Mar 11, 2016
  1. @cboylan

    Don't install ES on logstash workers

    Now that we are running logstash 2.0 the logstash daemon can talk to
    elasticsearch directly and load balance across the cluster. This means
    we don't need a local elasticsearch daemon to do that for us. The big
    savings here is in memory so stop installing and running elasticsearch
    completely on the workers.
    Note this will not uninstall an existing ES install; you will need to
    clean that up if a preexisting install is present.
    Change-Id: I9b622674a74a26e7c3024e684e05291f43aec021
    cboylan committed Mar 11, 2016
Commits on Mar 10, 2016
  1. @cboylan

    Use ruby 1.8 compat erb for logstash config

    The old logstash config was not ruby 1.8 compatible and we got funny
    results out of it. This version should work with ruby 1.8 and beyond.
    Change-Id: Ibe824dda7c96e5b333329ce25f65a14d3ebdef9c
    cboylan committed Mar 10, 2016
  2. @cboylan

    Logstash 2.0 compat ES output rule

    Logstash 2.0 defaults to HTTP elasticsearch output, which means that the
    elasticsearch output (no _http) does HTTP and a new elasticsearch_java
    output exists if you want to continue doing the native api output. We
    had been doing HTTP so just need to update the output name. The host
    parameter is also deprecated and you must pass an array to the hosts
    parameter instead, so update that as well.
    Note that this switches from using a local ES daemon to talking to the
    cluster itself directly, because new logstash is able to load balance
    over http. This reduces the overhead necessary to have resilient ES access.
    Note this is not compatible with Logstash 1.3.3, which is what we are
    currently running, so this change should only go in as part of an upgrade
    to Logstash 2.0 and beyond.
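    The output change amounts to something like the following (host names
    are placeholders):

```
# Logstash 1.x (HTTP via the _http output, deprecated host parameter):
#   output { elasticsearch_http { host => "localhost" } }
# Logstash 2.0 (elasticsearch output is HTTP; hosts takes an array):
output {
  elasticsearch {
    hosts => ["es01.example.com:9200", "es02.example.com:9200"]
  }
}
```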
    Change-Id: I788ecb936f9fa5a006332ed626f90c33a255d9bf
    cboylan committed Nov 19, 2015
Commits on Mar 4, 2016
  1. @cboylan

    Use osic-cloud1

    Due to the possibility for multiple OSIC clouds we need to distinguish
    between them in our clouds.yaml. Do that now before it becomes a problem
    later and refer to the current cloud as osic-cloud1.
    Change-Id: I3f35db2911a44200f0486e71fc215d021aa7c227
    cboylan committed Mar 4, 2016
  2. @cboylan

    Verify SSL with OSIC

    OSIC now has DNS configured with a proper verifiable certificate, so use it.
    Change-Id: If09d716b8aa466678fffd5bdddc176fbaaf7b949
    cboylan committed Mar 4, 2016
Commits on Mar 1, 2016
  1. @cboylan

    Add vexxhost cloud credentials

    Add vexxhost account credentials to our various clouds.yaml files. This
    covers the all clouds, ansible, and nodepool clouds.yaml files. With
    this in place we can work toward deploying tests onto vexxhost.
    Change-Id: I42101e9acc9f62897a3f63b85dd34a14adcf2394
    cboylan committed Mar 1, 2016
  2. @cboylan

    Use project_name not _id with OSIC

    Project names are easier for humans to deal with, so use the project_name
    key in clouds.yaml for OSIC, not the project_id key.
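    In clouds.yaml terms the change looks roughly like this (cloud entry,
    auth_url, and project name are placeholders):

```yaml
clouds:
  osic-cloud1:
    auth:
      auth_url: https://keystone.example.com:5000/v2.0  # placeholder
      username: nodepool                                # placeholder
      project_name: openstack-infra   # was: project_id: <uuid>
```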
    Change-Id: I15b6424e355c711941a43e78116ffb71f6647cb7
    cboylan committed Mar 1, 2016
  3. @cboylan

    Add OSIC clouds.yaml details

    This adds clouds.yaml information to our three clouds.yaml files for our
    two users in the OSIC cloud. This will let us manage the OSIC cloud
    resources and start deploying tests to OSIC with nodepool.
    Change-Id: I5a392d165fb6db2e70036008a55cd99eed237ab4
    cboylan committed Feb 26, 2016
Commits on Feb 24, 2016
  1. @cboylan

    Support config drive when using shade-launch-node

    Some clouds may not have a metadata service and need to retrieve key
    info via config drive. Add a flag to specifically request that a config
    drive is provided to the instance booted by nova to facilitate this
    information injection.
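    The flag might be wired up along these lines (flag name and wiring are
    assumptions for illustration; only the config drive request itself comes
    from the change description):

```python
import argparse

# Hypothetical sketch of the launch-node flag described above.
parser = argparse.ArgumentParser(description='Launch a node')
parser.add_argument('--config-drive', action='store_true',
                    help='Ask nova to boot the instance with a config drive')

args = parser.parse_args(['--config-drive'])
print(args.config_drive)  # value passed through to the nova boot call
```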
    Change-Id: Ic41df5b34ea67ad62949244e064db82410077453
    cboylan committed Feb 24, 2016
Commits on Feb 23, 2016
  1. @cboylan

    Put infracloud cert next to nodepool clouds.yaml

    We are installing a cert to trust the infracloud but were trying to put
    it in a dir that does not exist. Put it next to the clouds.yaml in
    ~nodepool/.config/openstack as that will exist because nodepool consumes
    clouds.yaml from there.
    Change-Id: I27e1a1d340e9864308c89c660ae014d7110fbe9f
    cboylan committed Feb 23, 2016
  2. @cboylan

    Clean up infracloud clouds.yaml to actually work

    This uses the correct infra domain name, changes domain keys from id to
    name, a fixes indentation for various keys.
    Change-Id: Ic8a8f67bc2586ca640b8c3e500f6cdad1abf0ebd
    cboylan committed Feb 23, 2016
  3. @cboylan

    Write all-clouds.yaml to disk

    We have had an all-clouds.yaml file that was not being managed on disk
    by puppet. Actually apply it to disk so that the template ends up on the
    puppetmaster as expected.
    Change-Id: I0136cab7c03b1932be5b24ff2e93ea8adb84c20d
    cboylan committed Feb 23, 2016
Commits on Jan 29, 2016
  1. @cboylan

    Provide separate nodepool builder log config

    Now that the nodepool builder is running as a separate daemon it needs
    its own log config file. Move the auto generated nodepool logging config
    stuff over to the new builder logging config as we can manage the main
    daemon's logging config by hand trivially now.
    Depends-On: I013835621dfbc311a0f7bd7c957b7d4656dfa628
    Change-Id: Ic1da30eab949876e5bd6c88e83979bdedc6dd50a
    cboylan committed Jan 29, 2016
Commits on Dec 10, 2015
  1. @cboylan

    Add doc on using jenkins restart playbook.

    Add documentation on how to run the jenkins restart playbook against a
    specific jenkins master. This is useful if a jenkins master starts to
    leak threads before its weekly restart.
    Change-Id: Ib5163589c1c83e4fcb7493daa387f42cda02bc9d
    cboylan committed Dec 10, 2015
Commits on Dec 4, 2015
  1. @cboylan

    Remove centos6 support

    This removes centos6 support because centos6 is
    no longer supported, so this code is dead.
    Change-Id: If59f10a6a9c576b1299b0e49a2e82d2a1a1d7ecf
    cboylan committed Dec 4, 2015
Commits on Nov 23, 2015
  1. @cboylan

    Don't write haproxy logs to /var/log/messages

    haproxy logs are plentiful and pollute /var/log/messages. We are already
    writing these logs to /var/log/haproxy.log, so prevent writing to
    messages as well with an & stop directive after writing to
    /var/log/haproxy.log.
    cboylan committed Nov 23, 2015
Commits on Nov 20, 2015
  1. @cboylan

    Allow haproxy to bind to all ports in selinux

    By default haproxy can only bind to HTTP(S) ports; all other ports can't
    be bound due to the selinux policy. A simple fix for this is to toggle the
    boolean that allows haproxy to bind any port in the selinux policy. Do
    this with an exec that first checks if the boolean is set.
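    The check-then-set exec is roughly equivalent to this (the boolean name
    haproxy_connect_any is an assumption about which boolean is meant):

```shell
# Only toggle when the boolean is not already on, keeping the exec
# idempotent; -P makes the change persistent across reboots.
getsebool haproxy_connect_any | grep -q 'on$' || \
    setsebool -P haproxy_connect_any on
```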
    Change-Id: I49c8bdc3586fa82cd954a6ef9be27f48f9a623ec
    cboylan committed Nov 20, 2015
Commits on Nov 13, 2015
  1. @cboylan

    Set indices.breaker.fielddata.limit to 70% on ES

    We had to bump indices.breaker.fielddata.limit to 70% against the
    running cluster in order for the elastic-recheck queries to run without
    erroring due to not enough memory for the @timestamp field. Add this to
    the config so that it persists across cluster installs.
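    The persisted setting is a one-line elasticsearch.yml entry:

```yaml
indices.breaker.fielddata.limit: 70%
```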
    Change-Id: Ia2f9c2ffff166bf3cc8f32c90b230249b3392406
    Depends-On: I46c0cb5157aae40a0029ff1b425ecc663d171768
    cboylan committed Nov 13, 2015
Commits on Nov 12, 2015
  1. @cboylan

    Allow CORS against elasticsearch API

    Since we do fine grained access control to the API via a proxy that
    limits what requests can be made, go ahead and allow CORS
    requests from anywhere, as they won't be able to write data anyway.
    This is useful so that you can test a new version of $tool hosted
    locally against the actual cluster or for admins to run admin tools
    hosted locally against the cluster.
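    For elasticsearch 1.x this is typically expressed in elasticsearch.yml
    along these lines (a sketch of the standard http module settings, not
    necessarily the exact values deployed):

```yaml
http.cors.enabled: true
http.cors.allow-origin: "*"
```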
    Change-Id: I774d0ad0b246315794ab387acc39c41c7cfac3cd
    Depends-On: I0aa8d5167c770c1024b7596da582d6cc089b1b47
    cboylan committed Nov 12, 2015
Commits on Nov 9, 2015
  1. @cboylan

    Update ES to 1.7.3

    The actual upgrade is handled by ansible but this will enforce the
    correct version for any new host builds. This should be merged after the
    ansible managed upgrade is complete.
    Change-Id: Ieda9cdffe7486d24f17012de5fc094d18a45ec78
    cboylan committed Nov 9, 2015