From b0f4033974f8552759cd1179a59aa740ea23955b Mon Sep 17 00:00:00 2001
From: gregharvey
Date: Fri, 27 Jan 2023 12:50:31 +0100
Subject: [PATCH 1/8] Better deploy_code role docs.

---
 docs/roles/deploy_code.md   | 80 ++++++++++++++++++++++++++++++++++++-
 roles/deploy_code/README.md | 80 ++++++++++++++++++++++++++++++++++++-
 2 files changed, 158 insertions(+), 2 deletions(-)

diff --git a/docs/roles/deploy_code.md b/docs/roles/deploy_code.md
index 4e371f28..ca864d81 100644
--- a/docs/roles/deploy_code.md
+++ b/docs/roles/deploy_code.md
@@ -1,5 +1,83 @@
 # Deploy
-Step that deploys the codebase.
+Step that deploys the codebase. On standalone machines and "static" clusters of web servers (e.g. machines whose addressing never changes) this is reasonably straightforward: the default variables should "just work". This role also supports deployment to autoscaling clusters of web servers, such as AWS autoscaling groups or containerised architecture. More details on that after this section.
+
+The shell script that wraps Ansible to handle the build steps has various "stages" and the `deploy_code` role has a set of tasks for each stage. The key one for code building on the static/current cluster servers is [the `deploy.yml` file](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/deploy.yml). Here you will find the steps for checking out and building code on web servers, as well as the loading in of any application-specific deploy code, [e.g. these tasks for Drupal 8](https://github.com/codeenigma/ce-deploy/tree/1.x/roles/deploy_code/deploy_code-drupal8/tasks). You choose what extra tasks to load via the `project_type` variable. Current core options are:
+
+* `drupal7`
+* `drupal8`
+* `matomo`
+* `mautic`
+* `simplesamlphp`
+
+Patches to support other common applications are always welcome! Also, Ansible inheritance being what it is, you can create your own custom deploy role in the same directory as your deployment playbook and Ansible will detect it and make it available to you. For example, if you create `./deploy_code/deploy_code-myapp/tasks/main.yml` relative to your playbook and set `project_type: myapp` in your project variables then `ce-deploy` will load in those tasks.
+
+# Autoscale deployment
+For autoscaling clusters - no matter the underlying tech - the build code needs to be stored somewhere central and accessible to any potential new servers in the cluster. Because the performance of network attached storage (NAS) is often too poor or unreliable, we do not deploy the code to NAS - although this would be the simplest approach. Instead the technique we use is to build the code on each current server in the cluster, as though it were a static cluster or standalone machine, but *also* copy the code to the NAS so it is available to all future machines. This makes the existence of mounted NAS that is attached to all new servers a pre-requisite for `ce-deploy` to work with autoscaling.
+
+**Important:** autoscale deployments need to be carefully co-ordinated with [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync) so new servers/containers have the correct scripts in place to install their code after they initialise.
Specifically, the `mount_sync.tarballs` or `mount_sync.squashed_fs` list variables in `ce-provision` must contain paths that match the location specified in the `deploy_code.mount_sync` variable in `ce-deploy`, so `ce-deploy` copies code to the place `ce-provision`'s `cloud-init` scripts expect to find it. (More on the use of `cloud-init` below.)
+
+(As an aside, we previously supported S3-like object storage for storing the built code, but given all the applications we work with need to have NAS anyway for end user file uploads and other shared cluster resources, it seems pointless to introduce a second storage mechanism when we already have one that works just fine.)
+
+This packaging of a copy of the code all happens in [the `cleanup.yml` file of the role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/cleanup.yml). It supports three options:
+
+* No autoscale (or AWS AMI-based autoscale - see below) - leave `mount_sync` as an empty string
+* `tarball` type - makes a `tar.gz` containing the code and copies it to the NAS
+* `squashfs` type - packs a [`squashfs`](https://github.com/plougher/squashfs-tools) image, copies it to the NAS and mounts it on each web server
+
+For both `tarball` and `squashfs` you need to set `mount_type` accordingly and the `mount_sync` variable to the location on your NAS where you want to store the built code.
+
+## `tarball` builds
+This is the simplest method of autoscale deployment: it simply packs up the code and copies it to the NAS at the end of the deployment. Everything else is just a standard "normal" build.
+
+**Important:** this method is only appropriate if you do not have too many files to deploy. Packing and restoring take a very long time if there are many small files, so it is not appropriate for things like `composer`-built PHP applications.
+
+### Rolling back
+With this method the live code directory is also the build directory, so you can edit the code in place in an emergency, and "rolling back" if there are issues with a build is just a case of pointing the live build symlink back to the previous build. As long as the `database_backup` is using the `rolling` method then the "roll back" database will still exist and the credentials will be correct in the application. If the backup method is `dump` then you will need to inspect [the `mysql_backup.dumps_directory` variable](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/database_backup/database_backup-mysql/defaults/main.yml#L4) to see where the backup was saved in order to restore it. By default this will be on the NAS so it is available to all web servers.
+
+## `squashfs` builds
+Because `tarball` is very slow, we have a second method using [`squashfs`](https://github.com/plougher/squashfs-tools). This filesystem is designed for packing and compressing files into read-only images - initially to deploy to removable media - that can simply be mounted, similar to a macOS Apple Disk Image (DMG) file. It is both faster to pack than a tarball *and* instant to deploy (it's just a `mount` command).
+
+However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed.
Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync)
+
+Consequently, at the build stage there are two important extra variables to set:
+
+```yaml
+deploy_code:
+  # List of services to manipulate to free the loop device for 'squashfs' builds, post lazy umount.
+  # @see the squashfs role in ce-provision where special permissions for deploy user to manipulate services get granted.
+  services: []
+  # services:
+  #   - php8.0-fpm
+  # What action to take against the services, 'reload' or 'stop'.
+  # Busy websites will require a hard stop of services to achieve the umount command.
+  service_action: reload
+```
+
+`services` is a list of Linux services to stop/reload in order to ensure the mount point is not locked. Usually this will be your PHP service, e.g.
+
+```yaml
+deploy_code:
+  services:
+    - php8.1-fpm
+```
+
+`service_action` is whether `ce-deploy` should reload the services in the list, or stop them, unmount and remount the image and start them again. The latter is the only "safe" way to deploy, but results in a second or two of downtime.
+
+Finally, as with the `tarball` method, the packed image is copied up to the NAS to be available to all future servers and is always named `deploy.sqsh`. The previous codebase is *also* packed and copied to the NAS, named `deploy_previous.sqsh` in the same directory.
+
+### Rolling back
+Rolling back from a bad `squashfs` build means copying `deploy_previous.sqsh` down from the NAS to a sensible location in the `ce-deploy` user's home directory, unmounting the current image and mounting `deploy_previous.sqsh` in its place.
+
+Same as with the `tarball` method, as long as the `database_backup` is using the `rolling` method then the "roll back" database will still exist and the credentials will be correct in the `deploy_previous.sqsh` image. Again, if the backup method is `dump` then you will need to inspect [the `mysql_backup.dumps_directory` variable](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/database_backup/database_backup-mysql/defaults/main.yml#L4) to see where the backup was saved in order to restore it.
+
+Emergency code changes are possible but more fiddly. You have to copy the codebase from the mount to a sensible, *writeable* location, make your changes, [use the `squashfs` command to pack a new image](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/cleanup.yml#L54), mount that image and, crucially, replace the `deploy.sqsh` image file on the NAS with your new image so future autoscale events will pick it up.
+
+# Autoscaling events
+Deploying code with autoscaling clusters relies on [cloud-init](https://cloudinit.readthedocs.io/) and is managed in our stack by [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync). Whenever a new server spins up in a cluster, the `cloud-init` run-once script put in place by `ce-provision` is executed, and it copies down the code from the NAS and deploys it to the correct location on the new server. At that point the server should become "healthy" and start serving the application.
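To make that concrete, the run-once step on a new instance boils down to mounting the NAS and then mounting the packed image it finds there. A minimal sketch of equivalent `cloud-init` user data follows - the NFS host, paths and image name are hypothetical examples, and the real script is templated by the `mount_sync` role in `ce-provision`, not copied from here:

```yaml
#cloud-config
# Illustrative sketch only: attach the shared NAS, then mount the read-only
# codebase image that ce-deploy previously copied there. All paths are examples.
mounts:
  - ["nfs.example.internal:/deploy", "/mnt/nas", "nfs4", "defaults,_netdev", "0", "0"]
runcmd:
  # Mount the packed codebase at the live code path.
  - mkdir -p /var/www/live
  - mount -o loop,ro /mnt/nas/myapp/deploy.sqsh /var/www/live
```

Once the image is mounted the new server can pass its health checks and start serving traffic.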
+
+# AMI-based autoscale
+**This is experimental.** Our infrastructure is heavily based on [GitLab CE](https://gitlab.com/rluna-gitlab/gitlab-ce) and one of the options we support with [our provisioning tools](https://github.com/codeenigma/ce-provision/tree/1.x) is packing an [AWS AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) with the code embedded within, thus no longer requiring the [cloud-init](https://cloudinit.readthedocs.io/) step at all. [We call this option `repack` and the code is here.](https://github.com/codeenigma/ce-provision/blob/1.x/roles/aws/aws_ami/tasks/repack.yml) This makes provisioning of new machines in a cluster a little faster than the `squashfs` option, but requires the ability to trigger a build on our infrastructure `controller` server to execute a cluster build and pack the AMI. That is what the `api_call` dictionary below provides for. You can see the API call constructed in [the last task of `cleanup.yml`](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/cleanup.yml#L205).
+
diff --git a/roles/deploy_code/README.md b/roles/deploy_code/README.md
index 4e371f28..ca864d81 100644
--- a/roles/deploy_code/README.md
+++ b/roles/deploy_code/README.md
@@ -1,5 +1,83 @@
 # Deploy
-Step that deploys the codebase.
+Step that deploys the codebase. On standalone machines and "static" clusters of web servers (e.g. machines whose addressing never changes) this is reasonably straightforward: the default variables should "just work". This role also supports deployment to autoscaling clusters of web servers, such as AWS autoscaling groups or containerised architecture. More details on that after this section.
+
+The shell script that wraps Ansible to handle the build steps has various "stages" and the `deploy_code` role has a set of tasks for each stage. The key one for code building on the static/current cluster servers is [the `deploy.yml` file](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/deploy.yml). Here you will find the steps for checking out and building code on web servers, as well as the loading in of any application-specific deploy code, [e.g. these tasks for Drupal 8](https://github.com/codeenigma/ce-deploy/tree/1.x/roles/deploy_code/deploy_code-drupal8/tasks). You choose what extra tasks to load via the `project_type` variable. Current core options are:
+
+* `drupal7`
+* `drupal8`
+* `matomo`
+* `mautic`
+* `simplesamlphp`
+
+Patches to support other common applications are always welcome! Also, Ansible inheritance being what it is, you can create your own custom deploy role in the same directory as your deployment playbook and Ansible will detect it and make it available to you. For example, if you create `./deploy_code/deploy_code-myapp/tasks/main.yml` relative to your playbook and set `project_type: myapp` in your project variables then `ce-deploy` will load in those tasks.
+
+# Autoscale deployment
+For autoscaling clusters - no matter the underlying tech - the build code needs to be stored somewhere central and accessible to any potential new servers in the cluster. Because the performance of network attached storage (NAS) is often too poor or unreliable, we do not deploy the code to NAS - although this would be the simplest approach. Instead the technique we use is to build the code on each current server in the cluster, as though it were a static cluster or standalone machine, but *also* copy the code to the NAS so it is available to all future machines.
This makes the existence of mounted NAS that is attached to all new servers a pre-requisite for `ce-deploy` to work with autoscaling.
+
+**Important:** autoscale deployments need to be carefully co-ordinated with [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync) so new servers/containers have the correct scripts in place to install their code after they initialise. Specifically, the `mount_sync.tarballs` or `mount_sync.squashed_fs` list variables in `ce-provision` must contain paths that match the location specified in the `deploy_code.mount_sync` variable in `ce-deploy`, so `ce-deploy` copies code to the place `ce-provision`'s `cloud-init` scripts expect to find it. (More on the use of `cloud-init` below.)
+
+(As an aside, we previously supported S3-like object storage for storing the built code, but given all the applications we work with need to have NAS anyway for end user file uploads and other shared cluster resources, it seems pointless to introduce a second storage mechanism when we already have one that works just fine.)
+
+This packaging of a copy of the code all happens in [the `cleanup.yml` file of the role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/cleanup.yml). It supports three options:
+
+* No autoscale (or AWS AMI-based autoscale - see below) - leave `mount_sync` as an empty string
+* `tarball` type - makes a `tar.gz` containing the code and copies it to the NAS
+* `squashfs` type - packs a [`squashfs`](https://github.com/plougher/squashfs-tools) image, copies it to the NAS and mounts it on each web server
+
+For both `tarball` and `squashfs` you need to set `mount_type` accordingly and the `mount_sync` variable to the location on your NAS where you want to store the built code.
+
+## `tarball` builds
+This is the simplest method of autoscale deployment: it simply packs up the code and copies it to the NAS at the end of the deployment. Everything else is just a standard "normal" build.
+
+**Important:** this method is only appropriate if you do not have too many files to deploy. Packing and restoring take a very long time if there are many small files, so it is not appropriate for things like `composer`-built PHP applications.
+
+### Rolling back
+With this method the live code directory is also the build directory, so you can edit the code in place in an emergency, and "rolling back" if there are issues with a build is just a case of pointing the live build symlink back to the previous build. As long as the `database_backup` is using the `rolling` method then the "roll back" database will still exist and the credentials will be correct in the application. If the backup method is `dump` then you will need to inspect [the `mysql_backup.dumps_directory` variable](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/database_backup/database_backup-mysql/defaults/main.yml#L4) to see where the backup was saved in order to restore it. By default this will be on the NAS so it is available to all web servers.
+
+## `squashfs` builds
+Because `tarball` is very slow, we have a second method using [`squashfs`](https://github.com/plougher/squashfs-tools). This filesystem is designed for packing and compressing files into read-only images - initially to deploy to removable media - that can simply be mounted, similar to a macOS Apple Disk Image (DMG) file. It is both faster to pack than a tarball *and* instant to deploy (it's just a `mount` command).
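To illustrate how light the deploy step is, mounting an image expressed as an Ansible task might look like the sketch below - the source and target paths are invented for the example and are not the role's real variables:

```yaml
# Illustrative sketch only: mount a packed codebase image read-only via a loop device.
- name: Mount the squashfs image at the live docroot.
  ansible.posix.mount:
    src: /mnt/nas/myapp/deploy.sqsh
    path: /var/www/live
    fstype: squashfs
    opts: loop,ro
    state: mounted
```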
+
+However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync)
+
+Consequently, at the build stage there are two important extra variables to set:
+
+```yaml
+deploy_code:
+  # List of services to manipulate to free the loop device for 'squashfs' builds, post lazy umount.
+  # @see the squashfs role in ce-provision where special permissions for deploy user to manipulate services get granted.
+  services: []
+  # services:
+  #   - php8.0-fpm
+  # What action to take against the services, 'reload' or 'stop'.
+  # Busy websites will require a hard stop of services to achieve the umount command.
+  service_action: reload
+```
+
+`services` is a list of Linux services to stop/reload in order to ensure the mount point is not locked. Usually this will be your PHP service, e.g.
+
+```yaml
+deploy_code:
+  services:
+    - php8.1-fpm
+```
+
+`service_action` is whether `ce-deploy` should reload the services in the list, or stop them, unmount and remount the image and start them again. The latter is the only "safe" way to deploy, but results in a second or two of downtime.
+
+Finally, as with the `tarball` method, the packed image is copied up to the NAS to be available to all future servers and is always named `deploy.sqsh`. The previous codebase is *also* packed and copied to the NAS, named `deploy_previous.sqsh` in the same directory.
+
+### Rolling back
+Rolling back from a bad `squashfs` build means copying `deploy_previous.sqsh` down from the NAS to a sensible location in the `ce-deploy` user's home directory, unmounting the current image and mounting `deploy_previous.sqsh` in its place.
+
+Same as with the `tarball` method, as long as the `database_backup` is using the `rolling` method then the "roll back" database will still exist and the credentials will be correct in the `deploy_previous.sqsh` image. Again, if the backup method is `dump` then you will need to inspect [the `mysql_backup.dumps_directory` variable](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/database_backup/database_backup-mysql/defaults/main.yml#L4) to see where the backup was saved in order to restore it.
+
+Emergency code changes are possible but more fiddly. You have to copy the codebase from the mount to a sensible, *writeable* location, make your changes, [use the `squashfs` command to pack a new image](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/cleanup.yml#L54), mount that image and, crucially, replace the `deploy.sqsh` image file on the NAS with your new image so future autoscale events will pick it up.
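As a rough sketch of that emergency procedure (the paths are examples, `mksquashfs` is provided by the `squashfs-tools` package, and this is not the role's actual task list):

```yaml
# Illustrative sketch only: repack an edited copy of the codebase and push it to
# the NAS so future autoscale events pick up the fix. All paths are examples.
- name: Pack the edited codebase into a new squashfs image.
  ansible.builtin.command: mksquashfs /home/ce-deploy/hotfix /home/ce-deploy/deploy.sqsh -noappend
- name: Replace the image on the NAS with the repacked one.
  ansible.builtin.copy:
    src: /home/ce-deploy/deploy.sqsh
    dest: /mnt/nas/myapp/deploy.sqsh
    remote_src: true
```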
+
+# Autoscaling events
+Deploying code with autoscaling clusters relies on [cloud-init](https://cloudinit.readthedocs.io/) and is managed in our stack by [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync). Whenever a new server spins up in a cluster, the `cloud-init` run-once script put in place by `ce-provision` is executed, and it copies down the code from the NAS and deploys it to the correct location on the new server. At that point the server should become "healthy" and start serving the application.
+
+# AMI-based autoscale
+**This is experimental.** Our infrastructure is heavily based on [GitLab CE](https://gitlab.com/rluna-gitlab/gitlab-ce) and one of the options we support with [our provisioning tools](https://github.com/codeenigma/ce-provision/tree/1.x) is packing an [AWS AMI](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AMIs.html) with the code embedded within, thus no longer requiring the [cloud-init](https://cloudinit.readthedocs.io/) step at all. [We call this option `repack` and the code is here.](https://github.com/codeenigma/ce-provision/blob/1.x/roles/aws/aws_ami/tasks/repack.yml) This makes provisioning of new machines in a cluster a little faster than the `squashfs` option, but requires the ability to trigger a build on our infrastructure `controller` server to execute a cluster build and pack the AMI. That is what the `api_call` dictionary below provides for. You can see the API call constructed in [the last task of `cleanup.yml`](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/deploy_code/tasks/cleanup.yml#L205).
+

From d0a0ed567a699ec1e210c2e041476615616ffed7 Mon Sep 17 00:00:00 2001
From: gregharvey
Date: Fri, 27 Jan 2023 12:55:12 +0100
Subject: [PATCH 2/8] roles path error in docs.

---
 roles/deploy_code/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/roles/deploy_code/README.md b/roles/deploy_code/README.md
index ca864d81..3cac2498 100644
--- a/roles/deploy_code/README.md
+++ b/roles/deploy_code/README.md
@@ -9,7 +9,7 @@ The shell script that wraps Ansible to handle the various build steps has variou
 * `mautic`
 * `simplesamlphp`
 
-Patches to support other common applications are always welcome! Also, Ansible inheritance being what it is, you can create your own custom deploy role in the same directory as your deployment playbook and Ansible will detect it and make it available to you. For example, if you create `./deploy_code/deploy_code-myapp/tasks/main.yml` relative to your playbook and set `project_type: myapp` in your project variables then `ce-deploy` will load in those tasks.
+Patches to support other common applications are always welcome! Also, Ansible inheritance being what it is, you can create your own custom deploy role in the same directory as your deployment playbook and Ansible will detect it and make it available to you. For example, if you create `./roles/deploy_code/deploy_code-myapp/tasks/main.yml` relative to your playbook and set `project_type: myapp` in your project variables then `ce-deploy` will load in those tasks.
 
 # Autoscale deployment
 For autoscaling clusters - no matter the underlying tech - the build code needs to be stored somewhere central and accessible to any potential new servers in the cluster. Because the performance of network attached storage (NAS) is often too poor or unreliable, we do not deploy the code to NAS - although this would be the simplest approach.
Instead the technique we use is to build the code on each current server in the cluster, as though it were a static cluster or standalone machine, but *also* copy the code to the NAS so it is available to all future machines. This makes the existence of mounted NAS that is attached to all new servers a pre-requisite for `ce-deploy` to work with autoscaling. From f71cd2559123ba29df9cec76c207402662a9fc02 Mon Sep 17 00:00:00 2001 From: gregharvey Date: Fri, 27 Jan 2023 14:18:59 +0100 Subject: [PATCH 3/8] roles path error in docs. --- docs/roles/deploy_code.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/roles/deploy_code.md b/docs/roles/deploy_code.md index ca864d81..3cac2498 100644 --- a/docs/roles/deploy_code.md +++ b/docs/roles/deploy_code.md @@ -9,7 +9,7 @@ The shell script that wraps Ansible to handle the various build steps has variou * `mautic` * `simplesamlphp` -Patches to support other common applications are always welcome! Also, Ansible inheritance being what it is, you can create your own custom deploy role in the same directory as your deployment playbook and Ansible will detect it and make it available to you. For example, if you create `./deploy_code/deploy_code-myapp/tasks/main.yml` relative to your playbook and set `project_type: myapp` in your project variables then `ce-deploy` will load in those tasks. +Patches to support other common applications are always welcome! Also, Ansible inheritance being what it is, you can create your own custom deploy role in the same directory as your deployment playbook and Ansible will detect it and make it available to you. For example, if you create `./roles/deploy_code/deploy_code-myapp/tasks/main.yml` relative to your playbook and set `project_type: myapp` in your project variables then `ce-deploy` will load in those tasks. # Autoscale deployment For autoscaling clusters - no matter the underlying tech - the build code needs to be stored somewhere central and accessible to any potential new servers in the cluster. Because the performance of network attached storage (NAS) is often too poor or unreliable, we do not deploy the code to NAS - although this would be the simplest approach. Instead the technique we use is to build the code on each current server in the cluster, as though it were a static cluster or standalone machine, but *also* copy the code to the NAS so it is available to all future machines. This makes the existence of mounted NAS that is attached to all new servers a pre-requisite for `ce-deploy` to work with autoscaling. From 9a56fa280e223b20c5a0e69bcd224e639b1adcac Mon Sep 17 00:00:00 2001 From: gregharvey Date: Fri, 27 Jan 2023 14:22:51 +0100 Subject: [PATCH 4/8] Adding a note about deploy_previous handling for squashfs. --- docs/roles/deploy_code.md | 2 +- roles/deploy_code/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/roles/deploy_code.md b/docs/roles/deploy_code.md index 3cac2498..30d87b11 100644 --- a/docs/roles/deploy_code.md +++ b/docs/roles/deploy_code.md @@ -66,7 +66,7 @@ deploy_code: Finally, as with the `tarball` method, the packed image is copied up to the NAS to be available to all future servers and is always named `deploy.sqsh`. The previous codebase is *also* packed and copied to the NAS, named `deploy_previous.sqsh` in the same directory. 
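In other words, after a successful build the NAS holds the new image alongside the outgoing one. A rough sketch of that rotation (example paths, not the role's actual tasks) is:

```yaml
# Illustrative sketch only: rotate images on the NAS so the previous codebase
# stays available for rollback. /mnt/nas/myapp is a made-up example directory.
- name: Keep the outgoing image as deploy_previous.sqsh.
  ansible.builtin.copy:
    src: /mnt/nas/myapp/deploy.sqsh
    dest: /mnt/nas/myapp/deploy_previous.sqsh
    remote_src: true
- name: Publish the freshly packed image as deploy.sqsh.
  ansible.builtin.copy:
    src: /home/ce-deploy/build/deploy.sqsh
    dest: /mnt/nas/myapp/deploy.sqsh
    remote_src: true
```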
### Rolling back -Rolling back from a bad `squashfs` build means copying `deploy_previous.sqsh` down from the NAS to a sensible location in the `ce-deploy` user's home directory, unmounting the current image and mounting `deploy_previous.sqsh` in its place. +Rolling back from a bad `squashfs` build means copying `deploy_previous.sqsh` down from the NAS to a sensible location in the `ce-deploy` user's home directory, unmounting the current image and mounting `deploy_previous.sqsh` in its place. Once you've done that, to ensure future autoscaling events do not load the bad code, on the NAS you will need to rename `deploy.sqsh` to something else (or delete it entirely if you're sure you don't want it) and rename `deploy_previous.sqsh` as `deploy.sqsh`, so it is used on an autoscale event. Same as with the `tarball` method, as long as the `database_backup` is using the `rolling` method then the "roll back" database will still exist and the credentials will be correct in the `deploy_previous.sqsh` image. Again, if the backup method is `dump` then you will need to inspect [the `mysql_backup.dumps_directory` variable](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/database_backup/database_backup-mysql/defaults/main.yml#L4) to see where the backup was saved in order to restore it. diff --git a/roles/deploy_code/README.md b/roles/deploy_code/README.md index 3cac2498..30d87b11 100644 --- a/roles/deploy_code/README.md +++ b/roles/deploy_code/README.md @@ -66,7 +66,7 @@ deploy_code: Finally, as with the `tarball` method, the packed image is copied up to the NAS to be available to all future servers and is always named `deploy.sqsh`. The previous codebase is *also* packed and copied to the NAS, named `deploy_previous.sqsh` in the same directory. ### Rolling back -Rolling back from a bad `squashfs` build means copying `deploy_previous.sqsh` down from the NAS to a sensible location in the `ce-deploy` user's home directory, unmounting the current image and mounting `deploy_previous.sqsh` in its place. +Rolling back from a bad `squashfs` build means copying `deploy_previous.sqsh` down from the NAS to a sensible location in the `ce-deploy` user's home directory, unmounting the current image and mounting `deploy_previous.sqsh` in its place. Once you've done that, to ensure future autoscaling events do not load the bad code, on the NAS you will need to rename `deploy.sqsh` to something else (or delete it entirely if you're sure you don't want it) and rename `deploy_previous.sqsh` as `deploy.sqsh`, so it is used on an autoscale event. Same as with the `tarball` method, as long as the `database_backup` is using the `rolling` method then the "roll back" database will still exist and the credentials will be correct in the `deploy_previous.sqsh` image. Again, if the backup method is `dump` then you will need to inspect [the `mysql_backup.dumps_directory` variable](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/database_backup/database_backup-mysql/defaults/main.yml#L4) to see where the backup was saved in order to restore it. From a0a8a3ed8084af7469f64eefb321bcf5796e1c44 Mon Sep 17 00:00:00 2001 From: gregharvey Date: Fri, 27 Jan 2023 14:33:53 +0100 Subject: [PATCH 5/8] Reference incorrect role for deploy user sudo perms. 
--- docs/roles/deploy_code.md | 2 +- roles/deploy_code/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/roles/deploy_code.md b/docs/roles/deploy_code.md index 30d87b11..9a1b4287 100644 --- a/docs/roles/deploy_code.md +++ b/docs/roles/deploy_code.md @@ -37,7 +37,7 @@ With this method the live code directory is also the build directory, therefore ## `squashfs` builds Because `tarball` is very slow, we have a second method using [`squashfs`](https://github.com/plougher/squashfs-tools). This filesystem is designed for packing and compressing files into read-only images - initially to deploy to removable media - that can simply be mounted, similar to a macOS Apple Disk Image (DWG) file. It is both faster to pack than a tarball *and* instant to deploy (it's just a `mount` command). -However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync) +However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `squashfs` role in `ce-provision`](https://github.com/codeenigma/ce-provision/blob/1.x/roles/squashfs) Consequently, at the build stage there are two important extra variables to set: diff --git a/roles/deploy_code/README.md b/roles/deploy_code/README.md index 30d87b11..9a1b4287 100644 --- a/roles/deploy_code/README.md +++ b/roles/deploy_code/README.md @@ -37,7 +37,7 @@ With this method the live code directory is also the build directory, therefore ## `squashfs` builds Because `tarball` is very slow, we have a second method using [`squashfs`](https://github.com/plougher/squashfs-tools). This filesystem is designed for packing and compressing files into read-only images - initially to deploy to removable media - that can simply be mounted, similar to a macOS Apple Disk Image (DWG) file. It is both faster to pack than a tarball *and* instant to deploy (it's just a `mount` command). -However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. 
[We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync) +However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `squashfs` role in `ce-provision`](https://github.com/codeenigma/ce-provision/blob/1.x/roles/squashfs) Consequently, at the build stage there are two important extra variables to set: From ac6d02ad223fc2a02a9429534be9e0122b390169 Mon Sep 17 00:00:00 2001 From: gregharvey Date: Fri, 27 Jan 2023 18:59:29 +0100 Subject: [PATCH 6/8] Minor edits to frontpage README. --- README.md | 5 ++--- roles/deploy_code/README.md | 2 +- 2 files changed, 3 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index 41910905..3046acb1 100644 --- a/README.md +++ b/README.md @@ -1,11 +1,10 @@ # ce-deploy -[![Build Status](https://api.travis-ci.com/codeenigma/ce-deploy.svg?branch=1.x)](https://api.travis-ci.com/codeenigma/ce-deploy.svg?branch=1.x) - A set of Ansible roles and wrapper scripts to deploy (web) applications. + ## Overview The "stack" from this repo is to be installed on a "deploy" server/runner, to be used in conjonction with a CI/CD tool (Jenkins, Gitlab, Travis, ...). -It allows the deploy steps for a given app to be easily customizable at will, and to be stored alongside the codebase of the project. +It allows the deploy steps for a given app to be easily customisable and to be stored alongside the codebase of the project. When triggered from a deployment tool, the stack will clone the codebase and "play" a given deploy playbook from there. diff --git a/roles/deploy_code/README.md b/roles/deploy_code/README.md index 9a1b4287..04a53adb 100644 --- a/roles/deploy_code/README.md +++ b/roles/deploy_code/README.md @@ -37,7 +37,7 @@ With this method the live code directory is also the build directory, therefore ## `squashfs` builds Because `tarball` is very slow, we have a second method using [`squashfs`](https://github.com/plougher/squashfs-tools). This filesystem is designed for packing and compressing files into read-only images - initially to deploy to removable media - that can simply be mounted, similar to a macOS Apple Disk Image (DWG) file. It is both faster to pack than a tarball *and* instant to deploy (it's just a `mount` command). -However, the build process is more complex. 
Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `squashfs` role in `ce-provision`](https://github.com/codeenigma/ce-provision/blob/1.x/roles/squashfs) +However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `squashfs` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/squashfs). Consequently, at the build stage there are two important extra variables to set: From a2cfcefafddf7ef4492683acf60c9ce833dd96dc Mon Sep 17 00:00:00 2001 From: gregharvey Date: Fri, 27 Jan 2023 19:00:19 +0100 Subject: [PATCH 7/8] Rebuilt docs. --- docs/roles/deploy_code.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/roles/deploy_code.md b/docs/roles/deploy_code.md index 9a1b4287..04a53adb 100644 --- a/docs/roles/deploy_code.md +++ b/docs/roles/deploy_code.md @@ -37,7 +37,7 @@ With this method the live code directory is also the build directory, therefore ## `squashfs` builds Because `tarball` is very slow, we have a second method using [`squashfs`](https://github.com/plougher/squashfs-tools). This filesystem is designed for packing and compressing files into read-only images - initially to deploy to removable media - that can simply be mounted, similar to a macOS Apple Disk Image (DWG) file. It is both faster to pack than a tarball *and* instant to deploy (it's just a `mount` command). -However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `squashfs` role in `ce-provision`](https://github.com/codeenigma/ce-provision/blob/1.x/roles/squashfs) +However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. 
[We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `squashfs` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/squashfs). Consequently, at the build stage there are two important extra variables to set: From cc6e58484e3d7754ca8b3a8e5eae72ec512515bc Mon Sep 17 00:00:00 2001 From: gregharvey Date: Fri, 3 Feb 2023 12:39:17 +0100 Subject: [PATCH 8/8] Accidentally overwrote docs change. --- docs/roles/deploy_code.md | 2 +- roles/deploy_code/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/roles/deploy_code.md b/docs/roles/deploy_code.md index 5a91353c..377ca80c 100644 --- a/docs/roles/deploy_code.md +++ b/docs/roles/deploy_code.md @@ -37,7 +37,7 @@ With this method the live code directory is also the build directory, therefore ## `squashfs` builds Because `tarball` is very slow, we have a second method using [`squashfs`](https://github.com/plougher/squashfs-tools). This filesystem is designed for packing and compressing files into read-only images - initially to deploy to removable media - that can simply be mounted, similar to a macOS Apple Disk Image (DWG) file. It is both faster to pack than a tarball *and* instant to deploy (it's just a `mount` command). -However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync) +However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `squashfs` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/squashfs). 
Consequently, at the build stage there are two important extra variables to set: diff --git a/roles/deploy_code/README.md b/roles/deploy_code/README.md index 5a91353c..377ca80c 100644 --- a/roles/deploy_code/README.md +++ b/roles/deploy_code/README.md @@ -37,7 +37,7 @@ With this method the live code directory is also the build directory, therefore ## `squashfs` builds Because `tarball` is very slow, we have a second method using [`squashfs`](https://github.com/plougher/squashfs-tools). This filesystem is designed for packing and compressing files into read-only images - initially to deploy to removable media - that can simply be mounted, similar to a macOS Apple Disk Image (DWG) file. It is both faster to pack than a tarball *and* instant to deploy (it's just a `mount` command). -However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `mount_sync` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/mount_sync) +However, the build process is more complex. Because mounted `squashfs` images are read only, we cannot build over them as we do in other types of build. [We alter the build path variables in the `_init` role](https://github.com/codeenigma/ce-deploy/blob/1.x/roles/_init/tasks/main.yml#L25) so the build happens in a separate place and then in the `cleanup.yml` we pack the built code into an image ready to be deployed. Again, because the images are read-only mounts, the live site needs to be *unmounted* with an `umount` command and then remounted with a `mount` command to be completely deployed. This requires the `ce-deploy` user to have extra `sudo` permissions, which is handled by [the `squashfs` role in `ce-provision`](https://github.com/codeenigma/ce-provision/tree/1.x/roles/squashfs). Consequently, at the build stage there are two important extra variables to set:
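For reference, the two variables referred to are the `deploy_code.services` list and the `service_action` switch shown in the first patch; a typical setting for a busy production site (which, per the notes above, needs a hard stop of services to free the mount) might be:

```yaml
deploy_code:
  services:
    - php8.1-fpm
  # 'reload' keeps services up; 'stop' fully stops them around the umount/mount.
  service_action: stop
```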