Ansible Agnostic Deployer

Prerequisites

There are several prerequisites for using this repository. Scripted and detailed instructions for setup are available in the Preparing Your Workstation document. [estimated effort 5-10 minutes]

Change variables

There are some variables that you need to set:

file: ansible/configs/bu-workshop/env_vars.yml:
  • hosted_zone_id

  • repo_method: set to "rhn" if you do not provide repo files directly

  • env_authorized_key

  • subdomain_base_suffix

  • key_name

  • admin_user_password: the OpenShift admin password

file: ansible/inventory/ec2.ini:
  • regions=: set so the dynamic inventory finds your desired region

file: ansible/configs/bu-workshop/env_secret_vars.yml:
  • The variables here are self-explanatory; review the file and set them all.

There are some tunables to size your cluster:

file: ansible/configs/bu-workshop/env_vars.yml:
  • num_nodes

  • user_vols

  • user_vols_size
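Putting these together, a minimal env_vars.yml might look like the following sketch. Every value shown is an illustrative placeholder, not a working default; substitute your own:

```yaml
# ansible/configs/bu-workshop/env_vars.yml -- placeholder values only
hosted_zone_id: ZXXXXXXXXXXXXXX      # your Route 53 hosted zone ID
repo_method: rhn                     # "rhn" if you don't provide repo files directly
env_authorized_key: mykeyname
subdomain_base_suffix: .example.com
key_name: mykeyname
admin_user_password: changeme        # OpenShift admin password
num_nodes: 4                         # cluster sizing tunables
user_vols: 200
user_vols_size: 4
```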

Standard Configurations

  • Several "Standard Configurations" are included in this repository.

  • A "Standard Configurations" or "Config" are a predefined deployment examples that can be used or copied and modified by anyone.

  • A "Config" will include all the files, templates, pre and post playbooks that a deployment example requires to be deployed.

  • "Config" specific Variable files will be included in the "Config" directory as well.

Note
Until we implement Ansible Vault, each "Config" has two vars files: env_vars.yml and env_secret_vars.yml. The example_secret_vars.yml file shows the format for what to put in your CONFIGNAME/env_secret_vars.yml file.

Running the Ansible Playbooks

Once you have installed your prerequisites and have configured all settings and files, simply run Ansible like so:

ansible-playbook -i $PWD/ansible/inventory/ec2.py ansible/main.yml \
  -e "env_type=config-name" \
  -e "aws_region=ap-southeast-2" \
  -e "guid=youruniqueidentifier" \
  -e "cloud_provider=ec2" \
  -e "software_to_deploy=openshift" \
  -e num_nodes=4 \
  -e ANSIBLE_REPO_PATH=$PWD/ansible
Note
Be sure to replace guid with a sensible, unique prefix of your choosing.

For "opentlc-shared" standard config, check out the README file

Cleanup

  • S3 Bucket

    • An S3 bucket is used to back the Docker registry. AWS will not let you delete a non-empty S3 bucket, so you must do this manually. The aws CLI makes this easy:

      aws s3 rm s3://bucket-name --recursive
    • Your bucket is named {{ env_type }}-{{ guid }}. So, for a bu-workshop environment where you provided the guid "Atlanta", your S3 bucket is called bu-workshop-atlanta.
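      A sketch of the full cleanup, assuming the {{ env_type }}-{{ guid }} naming above (note the guid is lowercased in the bucket name, since S3 bucket names are lowercase):

```shell
# Derive the bucket name from env_type and guid (example values)
env_type="bu-workshop"
guid="Atlanta"
bucket="${env_type}-$(printf '%s' "$guid" | tr '[:upper:]' '[:lower:]')"
echo "$bucket"

# Empty the bucket, then delete the bucket itself:
# aws s3 rm "s3://${bucket}" --recursive
# aws s3 rb "s3://${bucket}"
```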

  • CloudFormation Template

    • Just go into your AWS account to the CloudFormation section in the region where you provisioned, find the deployed stack, and delete it.

  • SSH config

    • This Ansible script places entries into your ~/.ssh/config. It is recommended that you remove them once you are done with your environment.

Troubleshooting

Information will be added here as problems are solved. So far deployment is straightforward but quite slow: expect at least an hour, or two or more if you are far from the target region.

EC2 instability

On occasion, EC2 has been observed to be unstable. This manifests in various ways:

  • The autoscaling group for the nodes takes an extremely long time to deploy, or never completes

  • Individual EC2 instances may have terrible performance, which can result in nodes that seem to be "hung" despite being reachable via SSH.

There is not much that can be done in this circumstance besides starting over (in a different region).

Re-Running

While Ansible is idempotent and supports being re-run, there are some known issues with doing so. Specifically:

  • You should skip the tag nfs_tasks with the --skip-tags option if you re-run the playbook after the NFS server has been provisioned and configured; those tasks are not safe to re-run and will fail.

  • You may also wish to skip the tag bastion_proxy_config when re-running, as the tasks associated with this play will re-write the same entries to your SSH config file, which could result in hosts becoming unexpectedly unreachable.
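Combining both of the skips above, a re-run might look like the following sketch (pass the same extra vars you used for the initial run):

```shell
ansible-playbook -i $PWD/ansible/inventory/ec2.py ansible/main.yml \
  --skip-tags nfs_tasks,bastion_proxy_config \
  -e "env_type=config-name" \
  -e "aws_region=ap-southeast-2" \
  -e "guid=youruniqueidentifier" \
  -e "cloud_provider=ec2" \
  -e "software_to_deploy=openshift" \
  -e num_nodes=4 \
  -e ANSIBLE_REPO_PATH=$PWD/ansible
```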