
Enhance Deploy with CI/CD Best Practices #10

Merged · 1 commit · May 14, 2021

Conversation

metaskills
Member

πŸ€·β€β™‚οΈ Why?

As pointed out in rails-lambda/lamby#83, deploys can be slow... especially if you run them from your Mac. In case you were not aware, Mac filesystem performance (docker/roadmap#7) is just horrible no matter how you cut it. This project does not want to get into solutions like docker-sync, especially when we have the ability to promote deploys from CI/CD systems like GitHub Actions. That does mean there is a Lamby development issue on Mac where native filesystem performance would need to be helped. Maybe the Lamby starter could take an opinion on that in the future. We will see.

πŸ‘©β€πŸ’» What?

There are some top level concerns we need to support here. They are, listed in order:

  1. Support simple on-demand (sans CI/CD) deploys from the project.
  2. Make CI/CD usage easy to integrate, with common caching methods fitting in cleanly.

πŸ” AWS_PROFILE || AWS_(ACCESS_KEY_ID|SECRET_ACCESS_KEY)

Deploys from the developer's host machine are done leveraging standard AWS credentials and profiles. Most developers only have one default profile, but targeting others is commonly done via the AWS_PROFILE environment variable. This is why we have a volume share on the ${HOME}/.aws:/root/.aws directory. Machines doing the deploy for us leverage the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables instead.

To get both 1 and 2 working together there is a virtual tug of war going on. We want to pass down host environment variables in a super clean way via docker compose. Yet adding both of these will cause issues because, when they are not present, docker compose will create them as empty strings. That means if the key/secret are absent but exported as empty strings, AWS_PROFILE has no way to work, since AWS tooling prefers the key/secret variables when they are set.
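The root of the problem is the shell distinction between "unset" and "empty". Both docker compose and POSIX shells use `${VAR-default}` expansion, where the default only applies when the variable is truly unset, not when it is an empty string. A quick sketch:

```shell
#!/bin/sh
# `${VAR-default}` substitutes the default only when VAR is UNSET...
unset AWS_PROFILE
echo "${AWS_PROFILE-default}"            # prints: default

# ...but an EMPTY string still counts as "set", so the default is skipped
# and the expansion yields the empty string.
AWS_ACCESS_KEY_ID=""
echo "key=${AWS_ACCESS_KEY_ID-fallback}" # prints: key=
```

So once compose exports `AWS_ACCESS_KEY_ID=""`, downstream tooling sees a set-but-empty credential rather than falling back to the profile.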

πŸ‘·πŸ½ How?

Here are some high level points on how we are going to accomplish the goals of this pull request. Detailed code callouts will speak to each:

  • Use a docker compose entrypoint to set/unset AWS environment variables as needed. This script then calls whatever bin wrapper is being requested for us, so doing a ./bin/server essentially calls ./bin/entrypoint ./bin/_server for us. We do this in the compose file so we do not have to apply --entrypoint to every compose run bin wrapper. BTW, I am super proud of this hack. See this docker compose feature request for where I got the idea: Hooks to run scripts on host before starting any containers docker/compose#6736
  • Add simple GitHub workflows for both test and deploy, where deploy is a simple workflow dispatch event. Basically a click-a-button deploy. Both of these actions leverage actions/cache to cache Bundler & Yarn installations. Note, we are not treading into Docker caching. We get layer caching on the ECR push automatically, and numerous posts (https://dev.to/dtinth/caching-docker-builds-in-github-actions-which-approach-is-the-fastest-a-research-18ei) mention how docker pulls are already fast. If someone wants to demonstrate caching Docker with AWS SAM further, I'd like to see it.
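A minimal deploy workflow of this shape might look like the following sketch. The workflow name, action versions, and the `./bin/deploy` step are assumptions for illustration, not the PR's verbatim files:

```yaml
name: Deploy
on:
  workflow_dispatch: # manual "Run workflow" button in the Actions tab
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Cache Bundler & Yarn installs keyed on the lockfiles.
      - uses: actions/cache@v2
        with:
          path: |
            ./vendor/bundle
            ./node_modules
          key: ${{ runner.os }}-deploy-${{ hashFiles('Gemfile.lock', 'yarn.lock') }}
      - name: Deploy
        run: ./bin/deploy
```

The `workflow_dispatch` trigger is what turns this into a button in the Actions tab rather than a push-triggered job.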

πŸ“‰ Benchmarks?

  • ~15m - Deploy from Mac
  • ~5m - Deploy from GitHub Actions
  • ~3m - Deploy from GitHub Actions (with cache)

Guides

There will be some guides changes coming very shortly to outline this in both our Getting Started & Deploy sections. Here are some things we will cover:

  • How to create a basic IAM user for Lambda Deploys and setting secrets for GitHub Actions.

# Demonstrate using good CI/CD practices. Using GitHub Actions
# here for both a simple test and deploy workflow.
#
cp -r ./lambify/.github "$PROJECT_FOLDER"

Adds the entire .github directory with test and deploy workflows to the project. See files below.

@@ -1,7 +1,6 @@

# Lamby
/.aws-sam
/.lamby

We do not need the .lamby directory anymore. It was originally added because, without it, a deploy from your local machine meant your development dependencies were hosed, since vendor/bundle was built for deployment. We now have scripting in place to warn about this and handle it automatically. So as we lean into good CI/CD practices, the exception will be dev host deploys with warnings and caveats.

- name: Environment
run: |
echo "UID=$(id -u)" >> $GITHUB_ENV
echo "GID=$(id -g)" >> $GITHUB_ENV

You may be wondering why this is here. Caching brought up an important issue (sorta) with Linux and Docker. Docker for Mac will automatically keep the user/group for shared volumes in sync. When doing caching for GitHub Actions, I ran into a lot of files and folders that could not be cached due to them being owned by root. The GitHub runner does not have UID and GID environment variables set. So we help it along a bit. Each of the actions will have this and I will point out how this connects with the changes in docker-compose.yml further below. If you are not on Linux, things still work as normal.
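One way docker-compose.yml could consume those UID/GID values is a `user:` mapping. This is a sketch of the idea only; the service name and the exact mapping are assumptions, not the PR's verbatim compose change:

```yaml
# docker-compose.yml (sketch; the `user:` mapping is an assumption)
services:
  web:
    build: .
    # Run the container as the host user so files written to shared
    # volumes are not owned by root; falls back to root (0:0) when the
    # UID/GID environment variables are unset, e.g. on Docker for Mac.
    user: "${UID-0}:${GID-0}"
```

With root-owned files out of the picture, the GitHub runner can tar up the cached directories without permission errors.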


WHOOPS! Forgot this critical change with the PR. cf45c3b

path: |
./vendor/bundle
./node_modules
key: {{ "${{ runner.os }}-deploy-${{ hashFiles('Gemfile.lock', 'yarn.lock') }}" }}

This is the caching part! A recent change to actions/cache now allows multiple paths and it does all the work. Because we got rid of the .lamby directory, the paths are now the same for both workflows.

with:
aws-access-key-id: {{ "${{ secrets.AWS_ACCESS_KEY_ID }}" }}
aws-secret-access-key: {{ "${{ secrets.AWS_SECRET_ACCESS_KEY }}" }}
aws-region: us-east-1

Here is where someone can pass secrets down via GitHub. Docs will call out the region config as something that may need to change depending on your needs. There will also be guides on how to set up a deploy user, linking to this: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/sam-permissions.html

node_modules \
test \
tmp \
vendor/bundle/ruby/2.7.0/cache

In order to support caching cleanly for the deploy step, we needed to move the ./bin/_build cleanup to the Dockerfile after the working directory is copied to the container. I know this creates a "layer" but I'm not really worried about it and appreciate the speed bump. Feedback welcome.
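A minimal sketch of that Dockerfile move, using the paths from the snippet above. The `COPY . .` destination and surrounding context are assumptions:

```dockerfile
# Dockerfile (sketch): prune dev/test files AFTER the working directory
# is copied into the image, so the cleanup lands in an image layer and
# the host working directory stays intact and cacheable by actions/cache.
COPY . .
RUN rm -rf \
    node_modules \
    test \
    tmp \
    vendor/bundle/ruby/2.7.0/cache
```

The trade-off is one extra layer in exchange for never mutating the checked-out tree on the runner.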

name=$(cat $SAM_BUCKET_NAME_FILE)
echo $name
fi


This was not needed for this work, but I was here and made the change. Basically, AWS SAM fixed a bug that required S3 options to be in place even when container images were being used. S3 with SAM is only needed for the ZIP package format.

I made a note in this issue (#9) about both but eventually we will be able to drop the ECR in ./bin/_bootstrap too since SAM will support creating that automatically as well.

rm -rf ./.bundle \
./vendor/bundle \
./node_modules
fi

Removed all that stuff that created the .lamby directory. Added a simple check that cleans up your dev dependencies when CI/CD is not the one doing the deploy. See further below for what was needed in ./bin/_setup as the inverse of this work.
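The guard described above can be sketched like this; the warning wording is an assumption, the paths match the snippet below:

```shell
#!/bin/sh
# Sketch: only wipe deploy-built dependencies when the deploy ran on a
# developer host. CI runners set CI=true, so they skip this cleanup and
# keep a pristine directory for actions/cache to archive.
if [ "$CI" != "true" ]; then
  echo "Removing deploy-built dependencies. Run bin/setup again." >&2
  rm -rf ./.bundle \
         ./vendor/bundle \
         ./node_modules
fi
```

On GitHub Actions the branch is skipped entirely, which is exactly what lets the cache step find `vendor/bundle` and `node_modules` after the job.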

node_modules \
test \
tmp \
vendor/bundle/ruby/2.7.0/cache

This is where the cleanup was moved to the Dockerfile so that when the deploy workflow job finished, it would have a good directory to cache.

@@ -1,28 +1,26 @@
#!/bin/sh
set -e

CLOUDFORMATION_BUCKET=$(cat .bucket-name)


See note about S3 and ZIP package type above.

sam package \
--region "$AWS_DEFAULT_REGION" \
--template-file ./.aws-sam/build/template.yaml \
--output-template-file ./.aws-sam/build/packaged.yaml \
--image-repository "$IMAGE_REPOSITORY" \
--s3-bucket "${CLOUDFORMATION_BUCKET}" \
--s3-prefix "{% include "_cctmp/dash_name.txt" %}-${RAILS_ENV}"

Bye bye S3 needs.

./vendor/bundle \
./node_modules
fi


This is the inverse of what we did in the ./bin/_build file. After someone does a deploy from their own machine, they are now required to run bin/setup again. They will likely have public/assets to remove as well.


# Execute the original script passed to docker-compose run.
#
eval "$@"

This is the magic behind the docker-compose.yml entrypoint addition, see below. It is basically a wrapper around our compose run wrappers that fixes ENV vars as needed by checking for empty key/secrets.
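A minimal sketch of what such an entrypoint could look like. The exact conditional logic here is an assumption, not the PR's verbatim script:

```shell
#!/bin/sh
# ./bin/entrypoint (sketch): docker compose exports empty strings for
# missing host variables, and a set-but-empty key/secret would shadow
# AWS_PROFILE. Unset the pair when blank so AWS tooling falls back to
# profile-based credentials; otherwise prefer the explicit keys.
if [ -z "$AWS_ACCESS_KEY_ID" ] && [ -z "$AWS_SECRET_ACCESS_KEY" ]; then
  unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY
else
  unset AWS_PROFILE
fi
# Execute the original script passed to docker-compose run.
eval "$@"
```

Because the compose file sets this as `entrypoint:`, every `docker-compose run` command passes through this cleanup before its real bin wrapper executes.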


@@ -4,10 +4,15 @@ services:
build:
context: '.'
dockerfile: Dockerfile-build
entrypoint: ./bin/entrypoint

Thanks to this docker compose feature request for the idea. docker/compose#6736

environment:
- RAILS_ENV=${RAILS_ENV-development}
- SAM_CLI_TELEMETRY=0
- AWS_PROFILE=${AWS_PROFILE-default}
- AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION-us-east-1}

Pass all the AWS environment variables for all docker compose run scripts and let the entrypoint clean them up as needed. This also has the upshot of allowing developers to use cloud native resources for dev/test via AWS_PROFILE with Dotenv.

- AWS_ACCESS_KEY_ID=${AWS_ACCESS_KEY_ID}
- AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY}
- AWS_DEFAULT_REGION=${AWS_DEFAULT_REGION-us-east-1}
- CI=${CI}

FYI, in case you did not know, GitHub Actions now passes CI=true. It used to be that we had to set this ourselves. This allows it to be passed down to all compose runners.

@@ -23,7 +23,7 @@ Resources:
RailsLambda:
Type: AWS::Serverless::Function
Metadata:
DockerContext: ./.lamby/RailsLambda
DockerContext: .

Removing the .lamby directory hacks.

@nitsujri

Fantastic stuff! Glad to see the .lamby directory is no longer needed.

@metaskills
Member Author

SEE ALSO #11

@metaskills metaskills deleted the DeployOptionsWithCiCd branch June 14, 2021 18:39