- Deploy ACM to OpenShift using the Operator. This will be your hub cluster.
- Set up managed clusters in ACM (including the local cluster). Apply the label `role=feature-candidate` to one cluster, and `role=stable` to all other clusters.
- Deploy Ansible Tower (tested with 3.7.3), ensuring that the `openshift` and `google-auth` pip modules are installed. One way to do this is to use my custom task-runner image for an OpenShift deployment, specifying `kubernetes_task_image: quay.io/akrohg/ansible-task` in the `group_vars` of your OpenShift installer. I installed Tower in this way on the ACM hub cluster.
- Edit the vars in `ansible-hook/setup/main.yml` to provide creds and endpoints for Ansible Tower, your hub cluster, and your spoke clusters.
- Run the setup for Ansible Tower. This performs the following:
  - Create an API key for Tower
  - Import this repo as a Project in Tower
  - Create a Job Template in Tower to run as a post-hook to the ACM application deployment
  - Create a Secret in your hub cluster to authenticate to Tower
  - Subscribe to the Ansible Automation Platform Resource Operator in your hub cluster
  - Create the `ClusterRoleBindings` required for the app service account to read node info and report which cloud it's running on. I previously included this in the app workload itself, but reconciliation from ACM failed - I suspect insufficient privileges.
  - Install Gatekeeper for managing `K8sExternalIPs` objects in all clusters
  - Install Open Policy Agent for enforcing the `external-ips` `Policy`
  - Deploy an Apache HTTPD load balancer to your hub cluster in the `acm-demo` namespace. This will load balance between app deployments across managed clusters. No apps have been created yet, so this should initially return a 503.
`ansible-playbook ansible-hook/setup/main.yml`
NOTE: Run all commands against your hub cluster.
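The `role` labels from the managed-cluster step above can be applied from the hub with `oc`. This is a minimal sketch - the cluster names are placeholders for your own managed clusters:

```shell
# List your managed clusters first to get their names
oc get managedclusters

# Pick one cluster as the feature candidate (cluster names are placeholders)
oc label managedcluster cluster-a role=feature-candidate

# Everything else, including local-cluster, gets the stable role
for c in local-cluster cluster-b cluster-c; do
  oc label managedcluster "$c" role=stable
done
```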
- Add the app subscription:
  `./run-deployment.sh -s`
  This will apply `v1` of the app to your `feature-candidate` cluster. The `AnsibleJob` should run and update the `ConfigMap` of the load balancer to target your newly deployed app. This takes about 4 minutes. Refresh the load balancer route in your browser to see that it's now resolving.
- Release `v1` to stable clusters:
  `./run-deployment.sh -v v1`
  This will apply `v1` of the app to all managed clusters (both `feature-candidate` and `stable`). A new `AnsibleJob` resource should be created, which will update the load balancer to distribute traffic among all clusters. This may take another 4 minutes or so - refresh the load balancer a few times to see (based on the zone/provider reported by the app) that traffic is distributed using a round-robin strategy.
- Go to Govern risk in ACM and enable the `external-ips` policy to illustrate control over this vulnerability. You'll notice that the `Service` in `v1` of the app has an `externalIPs` attribute, which triggers a policy violation on all clusters. You can use the Web Terminal to find the affected `Service` across all clusters by running:
  `search kind:Service name:deployment-example`
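If you prefer plain `oc` to the Web Terminal, the same check can be made per cluster. The context names here are placeholders for your own kubeconfig contexts:

```shell
# Check each cluster for the offending Service and its externalIPs field
# (context names are placeholders for your own kubeconfig contexts)
for ctx in hub spoke-1 spoke-2; do
  echo "== ${ctx} =="
  oc --context "$ctx" get service deployment-example \
    -o jsonpath='{.spec.externalIPs}{"\n"}'
done
```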
- Release `v2` to your feature candidate cluster to address the vulnerability:
  `./run-deployment.sh -v v2-alpha`
  After reconciliation, refreshing the load balancer should show that your feature candidate cluster has been upgraded to `v2` of the app. Notice (after a minute) that the number of policy violations decreases by one.
- Release `v2` to stable clusters to address the remaining policy violations:
  `./run-deployment.sh -v v2`
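Once the final rollout reconciles, you can confirm from the hub that the violations are gone by listing the ACM policies and their compliance state:

```shell
# All clusters should now report Compliant for the external-ips policy
oc get policies.policy.open-cluster-management.io --all-namespaces
```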