Containerized openshift-ansible to run playbooks
The image is designed to run as a non-root user. The container's UID is mapped to the username
default at runtime. Therefore, the container's environment reflects that user's settings, and the configuration should match that. For example, the default user's home directory is
/opt/app-root/src, so ssh keys are expected to be under
/opt/app-root/src/.ssh. If you ran a container as
root you would have to adjust the container's configuration accordingly, e.g. by placing ssh keys under
/root/.ssh instead. Nevertheless, the expectation is that containers will be run as non-root; for example, this container image can be run inside OpenShift under the default
restricted security context constraint.
Note: at this time there are known issues that prevent running this image for installation/upgrade purposes (i.e. running one of the config/upgrade playbooks) from a host that is itself an installation target: if the playbook you want to run attempts to manage the docker daemon and restart it (as install/upgrade playbooks do), it will kill the container itself mid-operation.
A note about the name of the image
The released container images for openshift-ansible follow the naming scheme determined by OpenShift's
imageConfig.format configuration option. This means that the released image name is
openshift/origin-ansible rather than a name matching the repository.
This provides consistency with other images used by the platform, and it is also a requirement for some use cases, such as running the image from
oc cluster up.
At the very least, when running a container you must specify:
An inventory. This can be a location inside the container (possibly mounted as a volume) with a path referenced via the
INVENTORY_FILE environment variable. Alternatively you can serve the inventory file from a web server and use the
INVENTORY_URL environment variable to fetch it, or
DYNAMIC_SCRIPT_URL to download a script that provides a dynamic inventory.
ssh keys so that Ansible can reach your hosts. These should be mounted as a volume under
/opt/app-root/src/.ssh under normal usage (i.e. when running the container as non-root).
The playbook to run. This is set using the
PLAYBOOK_FILE environment variable. If you don't specify a playbook, the
openshift_facts playbook will be run to collect and show facts about your OpenShift environment.
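For illustration, a minimal static inventory might look like the sketch below. The host names and group layout are hypothetical placeholders; a real inventory defines whatever groups and variables the playbook you run expects:

```ini
; Example inventory with placeholder hosts. Mount this file into the
; container and point INVENTORY_FILE at the mount path, e.g. /tmp/inventory.
[OSEv3:children]
masters
nodes

[masters]
master.example.com

[nodes]
master.example.com
node1.example.com
```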
Here is an example of how to run a containerized
openshift-ansible playbook that checks the expiration dates of OpenShift's internal certificates:

    docker run -u `id -u` \
        -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
        -v /etc/ansible/hosts:/tmp/inventory \
        -e INVENTORY_FILE=/tmp/inventory \
        -e PLAYBOOK_FILE=playbooks/openshift-checks/certificate_expiry/default.yaml \
        -e OPTS="-v" -t \
        docker.io/openshift/origin-ansible
You might want to adjust some of the options in the example to match your environment and/or preferences. For example: you might want to create a separate directory on the host where you'll copy the ssh key and inventory files prior to invocation to avoid unwanted SELinux re-labeling of the original files or paths (see below).
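The copy-first workflow can be sketched as follows. The file contents below are placeholders so the sketch is self-contained; in practice you would copy your real key ($HOME/.ssh/id_rsa) and inventory instead, and the docker invocation is shown as a comment since it requires the image to be pulled and hosts to be reachable:

```shell
# Copy the ssh key and inventory into a scratch directory so that only
# the copies get SELinux re-labeled by the :Z/:z volume mounts.
workdir=$(mktemp -d)
printf 'PLACEHOLDER PRIVATE KEY\n' > "$workdir/id_rsa"
printf '[masters]\nmaster.example.com\n' > "$workdir/inventory"
chmod 600 "$workdir/id_rsa"   # ssh keys must be readable only by their owner

# Then mount the copies rather than the originals:
#   docker run -u `id -u` \
#     -v $workdir/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
#     -v $workdir/inventory:/tmp/inventory:z \
#     -e INVENTORY_FILE=/tmp/inventory \
#     -e PLAYBOOK_FILE=playbooks/openshift-checks/certificate_expiry/default.yaml \
#     -e OPTS="-v" -t \
#     docker.io/openshift/origin-ansible
```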
Here is a detailed explanation of the options used in the command above:
-u `id -u` makes the container run with the same UID as the current user, which is required so that the ssh key can be read inside the container (ssh private keys are expected to be readable only by their owner). Usually you would invoke
docker run as a non-root user that has privileges to run containers and leave this option as is.
-v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z mounts your ssh key (
$HOME/.ssh/id_rsa) under the default user's
$HOME/.ssh in the container (as explained above,
/opt/app-root/src is the home directory of the default user in the container). If you mount the ssh key into a non-standard location you can add an environment variable with
-e ANSIBLE_PRIVATE_KEY_FILE=/the/mount/point or set
ansible_ssh_private_key_file=/the/mount/point as a variable in the inventory to point Ansible at it.
Note that the ssh key is mounted with the
:Z flag: this is also required so that the container can read the ssh key from its restricted SELinux context; this means that your original ssh key file will be re-labeled to something like
system_u:object_r:container_file_t:s0:c113,c247. For more details about
:Z please check the
docker-run(1) man page. Keep this in mind when providing these volume mount specifications, because it could have unexpected consequences: for example, if you mount (and therefore re-label) your whole
$HOME/.ssh directory you will block
sshd from accessing your keys. This is a reason why you might want to work on a separate copy of the ssh key, so that the original file's labels remain untouched.
-v /etc/ansible/hosts:/tmp/inventory and
-e INVENTORY_FILE=/tmp/inventory mount the Ansible inventory file into the container as
/tmp/inventory and set the corresponding environment variable to point at it, respectively. The example uses
/etc/ansible/hosts as the inventory file, as this is a default location, but your inventory is likely to be elsewhere, so adjust as needed. Note that depending on the file you point to you might have to handle SELinux labels in a similar way as with the ssh keys, e.g. by adding a
:z flag to the volume mount, so again you might prefer to copy the inventory to a dedicated location first.
-e PLAYBOOK_FILE=playbooks/openshift-checks/certificate_expiry/default.yaml specifies the playbook to run as a path relative to the top-level directory of openshift-ansible.
-e OPTS="-v" and -t make the output look nicer: the
default.yaml playbook does not generate results and runs quietly unless we add the
-v option to the
ansible-playbook invocation, and a TTY is allocated via
-t so that Ansible adds color to the output.
Further usage examples are available in the examples directory with samples of how to use the image from within OpenShift.
Running openshift-ansible as a System Container
Building the System Container: see BUILD.md.
Copy the ssh public key of the host machine to the master and node machines in the cluster.
If the inventory file needs additional files, it can reference them under the path
/var/lib/openshift-installer in the container, as that directory is bind mounted from the host (the host path of the bind mount can be controlled via an install-time variable).
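For example, an inventory variable can reference a file that you have copied to the bind-mounted host directory. The file name below is illustrative, assuming a variable such as openshift_master_htpasswd_file that takes a path readable from inside the container:

```ini
[OSEv3:vars]
; Illustrative: the htpasswd file was copied into the host directory that
; is bind mounted into the container at /var/lib/openshift-installer
openshift_master_htpasswd_file=/var/lib/openshift-installer/htpasswd
```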
Run the ansible system container:
    atomic install --system \
        --set INVENTORY_FILE=$(pwd)/inventory.origin \
        docker.io/openshift/origin-ansible

    systemctl start origin-ansible
The INVENTORY_FILE variable tells the installer which inventory file on the host will be bind mounted inside the container. In the example above, a file called
inventory.origin in the current directory is used as the inventory file for the installer.
Finally, to clean up the container:
atomic uninstall origin-ansible