This repo represents a two-tier system composed of Strapi, a headless CMS, and MongoDB, a NoSQL database. Both components have been architected for high availability.
- Strapi: 3-replica Deployment config
- Mongo: 3-replica StatefulSet (Mongo replica set)
My thought process was to create the infrastructure and its automation so that operational specifics are abstracted away into playbooks. In addition, all infrastructure is decoupled (for example, the OpenShift templates are separate from the Ansible playbooks) so that each individual component can be run on its own.
This allows:
- devops folks to learn, interpret and run infra code as needed
- CI/CD pipelines to do the same (in case Ansible is not available)
- Developers to do what they do best
Check out the ansible docs for all available playbooks.
To build and deploy Strapi, perform these steps:
- Open a PR (take note of the PR number)
- Build Strapi using the `build-strapi.yaml` playbook
- Run the `deploy-mongo.yaml` playbook
- Run the `deploy-strapi.yaml` playbook when the image is available and Mongo is ready
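For reference, running these playbooks might look roughly like the following. This is a sketch only: the working directory, inventory, and the `pr_number` extra var are assumptions, not confirmed by the playbooks, so check the ansible docs for the actual parameters.

```bash
# Assumption: playbooks live under ansible/ and accept the PR number as an extra var.
cd ansible

# Build the Strapi image for the open PR
ansible-playbook build-strapi.yaml -e pr_number=<PR_NUM>

# Deploy the Mongo replica set
ansible-playbook deploy-mongo.yaml -e pr_number=<PR_NUM>

# Deploy Strapi once the image exists and Mongo reports ready
ansible-playbook deploy-strapi.yaml -e pr_number=<PR_NUM>
```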
The application build and deployments (onto OpenShift) leverage several technologies to streamline the process:
- Ansible (`ansible`)
- OpenShift templates (`openshift/templates`)
Together they deploy an instance of Strapi, an open-source headless CMS.
A running version of the application can be found here: http://strapi-pr-5-va3azs-patricksimonian-ocp201-tst-dev.pathfinder.gov.bc.ca/
This application (and its automation) leverages GitHub Flow (more info here).
- Open pull requests are utilized as the points to build and deploy code from
- Pull requests are not closed until they reach a prod-like environment (in our case, the dev namespace)
- You will require an OPEN PR in order to use the Ansible playbooks to build and deploy this application.
There is heavy use of OpenShift templates. I opted not to embed the templates within Ansible Jinja templates so that they could be kept separate and run standalone.
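Because the templates stand alone, they can be processed directly with `oc` without going through Ansible. The sketch below assumes illustrative template filenames and parameter names; check `openshift/templates` for the actual files and required parameters.

```bash
# Assumption: strapi-deploy.yaml and the NAME parameter are placeholders,
# not the real template/parameter names in this repo.
oc process -f openshift/templates/strapi-deploy.yaml \
  -p NAME=strapi-pr-3 \
  | oc apply -f -
```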
All components have a set of labels to organize separate PR-based deployments, as well as to provide an easy way to group them up for cleaning.
- `app`: identifies a group of objects that belong to a pull request. This is the instance name plus a suffix, such as `strapi-pr-3` and `mongo-pr-3`
- `group`: a generic label to get/delete all versions of an instance, such as `strapi` and `mongo`
- `pr`: identifies all objects that share the same pull request (allows you to grab Strapi and Mongo objects together)
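These labels work with standard `oc` label selectors for inspecting or cleaning up a PR's objects. The PR number and instance names below are illustrative:

```bash
# Everything (Strapi + Mongo) belonging to pull request 3
oc get all -l pr=3

# Only the Strapi objects for that PR
oc get all -l app=strapi-pr-3

# Remove every deployed version of Strapi, regardless of PR
# (note: "all" does not include PVCs, secrets, or config maps)
oc delete all -l group=strapi
```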
The operational plan describes the process for backing up and testing a restore of the Mongo replica set.
The backup/restore process is totally new to me and was indeed a big learning point. My mantra was:
make it simple -> make it work -> make it right -> make it better
I comfortably covered up to make it work. My hope is to transfer the pod template into a cron job, which would require modifying the backup playbook so that it does not delete the pod as a final task.
- Rebuild the Strapi image to the latest version using the build playbook *
- Perform Backup and Restore check of database *
- Redeploy Strapi and Mongo *
- Verify readiness and health of both services
- If Mongo is unhealthy, scale both services down
- Restore Mongo
- Bring back up Mongo and Strapi and verify again
An asterisk (*) means a playbook exists for that step; a missing asterisk means a playbook still needs to be written.
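As a rough illustration of the backup and restore-check steps (this is not the playbook logic, and the pod name is an assumption), a manual run against the replica set might look like:

```bash
# Assumption: mongo-pr-3-0 is the primary pod of the StatefulSet and
# credentials are available inside the pod; adjust to your instance.
oc exec mongo-pr-3-0 -- mongodump --archive=/tmp/backup.archive

# Restore the archive (here into the same replica set) to verify the dump is usable
oc exec mongo-pr-3-0 -- mongorestore --drop --archive=/tmp/backup.archive
```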
There are two ways that I'd reduce service disruption:
- Perform upgrades outside of business hours
- Perform a blue-green deployment (this would be preferred)
- Combine the Strapi and Mongo playbooks together
- Add readiness checks to Strapi