# Workflow for production deploy #120

Merged 5 commits on Oct 19, 2021.
4 changes: 2 additions & 2 deletions `.github/workflows/ecs-deploy-dev.yml`

```diff
@@ -67,8 +67,8 @@ jobs:
         uses: docker/build-push-action@v2
         with:
           builder: ${{ steps.buildx.outputs.name }}
-          cache-from: type=gha,scope=benefits
-          cache-to: type=gha,scope=benefits,mode=max
+          cache-from: type=gha,scope=cal-itp
+          cache-to: type=gha,scope=cal-itp,mode=max
           context: .
           push: true
           tags: ${{ steps.define-image-paths.outputs.client }}
```
97 changes: 97 additions & 0 deletions `.github/workflows/ecs-deploy-prod.yml`

```yaml
name: Deploy to Amazon ECS (prod)

on:
  workflow_dispatch:
  push:
    branches:
      - prod

defaults:
  run:
    shell: bash

jobs:
  deploy:
    runs-on: ubuntu-latest
    environment: prod
    concurrency: prod

    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: AWS Login to Amazon ECR
        id: aws-login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Define image paths
        id: define-image-paths
        env:
          ECR_REGISTRY: ${{ steps.aws-login-ecr.outputs.registry }}
          GIT_SHA: ${{ github.sha }}
          AWS_CLI_TAG: ${{ secrets.AWS_CLI_TAG }}
        run: |
          echo "::set-output name=client::$ECR_REGISTRY/cal-itp-benefits-client:$GIT_SHA"
          echo "::set-output name=config::$ECR_REGISTRY/aws-cli:$AWS_CLI_TAG"

      - name: Docker Login to Amazon ECR
        id: docker-login-ecr
        uses: docker/login-action@v1
        with:
          registry: ${{ steps.aws-login-ecr.outputs.registry }}
          username: ${{ secrets.AWS_ACCESS_KEY_ID }}
          password: ${{ secrets.AWS_SECRET_ACCESS_KEY }}

      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1

      - name: Build, tag, and push image to Amazon ECR
        id: build-client-image
        uses: docker/build-push-action@v2
        with:
          builder: ${{ steps.buildx.outputs.name }}
          cache-from: type=gha,scope=cal-itp
          cache-to: type=gha,scope=cal-itp,mode=max
          context: .
          push: true
          tags: ${{ steps.define-image-paths.outputs.client }}

      - name: Add environment-specific config to ECS task
        env:
          AWS_ACCOUNT: ${{ secrets.AWS_ACCOUNT }}
          AWS_BUCKET: ${{ secrets.AWS_BUCKET }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
        run: |
          .aws/set-env.sh .aws/ecs-task.json

      - name: Fill in client image ID in ECS task
        id: client-task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: .aws/ecs-task.json
          container-name: cal-itp-benefits-client
          image: ${{ steps.define-image-paths.outputs.client }}

      - name: Fill in config image ID in ECS task
        id: config-task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: ${{ steps.client-task-def.outputs.task-definition }}
          container-name: cal-itp-benefits-client-config
          image: ${{ steps.define-image-paths.outputs.config }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.config-task-def.outputs.task-definition }}
          service: cal-itp-benefits-client
          cluster: cal-itp-clientCluster
          wait-for-service-stability: true
```
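The `Define image paths` step uses the `::set-output` workflow command, which GitHub has since deprecated in favor of appending to the file named by `$GITHUB_OUTPUT`. A minimal local sketch of the newer form (registry and SHA values are illustrative, not taken from this PR):

```shell
# Simulate the "Define image paths" step with the $GITHUB_OUTPUT mechanism;
# on a real runner, GITHUB_OUTPUT is a file path provided by GitHub Actions.
ECR_REGISTRY="123456789012.dkr.ecr.us-west-2.amazonaws.com"  # illustrative
GIT_SHA="abc1234"                                            # illustrative
GITHUB_OUTPUT="$(mktemp)"
echo "client=$ECR_REGISTRY/cal-itp-benefits-client:$GIT_SHA" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"
```

Later steps would still read the value as `${{ steps.define-image-paths.outputs.client }}`; only the write side changes.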
4 changes: 2 additions & 2 deletions `.github/workflows/ecs-deploy-test.yml`

```diff
@@ -58,8 +58,8 @@ jobs:
         uses: docker/build-push-action@v2
         with:
           builder: ${{ steps.buildx.outputs.name }}
-          cache-from: type=gha,scope=benefits
-          cache-to: type=gha,scope=benefits,mode=max
+          cache-from: type=gha,scope=cal-itp
+          cache-to: type=gha,scope=cal-itp,mode=max
           context: .
           push: true
           tags: ${{ steps.define-image-paths.outputs.client }}
```
4 changes: 2 additions & 2 deletions `.github/workflows/pre-commit.yml`

```diff
@@ -2,9 +2,9 @@ name: Pre-commit checks

 on:
   push:
-    branches: [main, test, dev]
+    branches: [prod, test, dev]
   pull_request:
-    branches: [main, test, dev]
+    branches: [prod, test, dev]

 jobs:
   pre-commit:
```
21 changes: 11 additions & 10 deletions `docs/deployment/README.md`

````diff
@@ -31,10 +31,10 @@ bucket.

 !!! warning

-    The following command will decrypt and download the `benefits` configuration from S3 into the directory from which it is
-    run on your local computer. Be sure this is what you want to do.
+    The following command will decrypt and download the `benefits` configuration from S3 into the `.aws/config` directory on
+    your local computer. Be sure this is what you want to do.

-To replicate the AWS configuration locally, fill in the appropriate values in your local `.env` file:
+To copy the AWS configuration locally, fill in the appropriate values in your local `.env` file:

 * for the AWS connection:

@@ -43,32 +43,33 @@
     AWS_ACCESS_KEY_ID=access-key-id
     AWS_SECRET_ACCESS_KEY=secret-access-key
     AWS_BUCKET=bucket-name
-    CONFIG_FILE=file.json
     ```

-* and to ensure Django looks in the `configvolume` (defined in [docker-compose.yml][docker-compose.yml]):
+* and to ensure Django uses the downloaded configuration:

     ```console
-    DJANGO_INIT_PATH=config/file.json
+    DJANGO_INIT_PATH=config/<file>.json
     ```

 and then pull the files down to your local computer:

 ```bash
-docker-compose run s3pull
+docker compose run s3pull
 ```

 ### Update AWS

 !!! warning

-    The following command will send the **entire contents** of the directory from which it is run on your local computer into
-    the `benefits` S3 bucket for the configured environment. Be sure this is what you want to do.
+    The following command will send the **entire contents** of the `.aws/config` directory from your local computer into the
+    `benefits` S3 bucket for the configured environment. Be sure this is what you want to do.

 A Docker Compose service can also be used to push updates to the configuration data into S3 for the given deploy environment:

+Ensure you have content (e.g. an `.env` or `config.json` file) inside `.aws/config` in your local repository and then run:
+
 ```bash
-docker-compose run s3push
+docker compose run s3push
 ```
````
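Because `s3push` now syncs the entire `.aws/config` directory to the bucket, it can be worth previewing the transfer before running it for real. A hypothetical compose service (not part of this PR) that reuses the same pattern with the AWS CLI's `--dryrun` flag, which lists the operations `aws s3 sync` would perform without copying anything:

```yaml
# Sketch only: a dry-run variant of the s3push service; service name is
# invented, everything else mirrors the s3push definition in this PR.
s3push-dry:
  image: amazon/aws-cli
  entrypoint: ["/bin/sh"]
  command: ["-c", "aws s3 sync . s3://${AWS_BUCKET} --dryrun"]
  environment:
    - AWS_ACCESS_KEY_ID
    - AWS_SECRET_ACCESS_KEY
    - AWS_DEFAULT_REGION
  volumes:
    - ../.aws/config:/aws
```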
13 changes: 5 additions & 8 deletions `localhost/docker-compose.yml`

```diff
@@ -17,7 +17,7 @@ services:
       - "8000"
     volumes:
       - ./data:/home/calitp/app/data:cached
-      - configvolume:/home/calitp/app/config:ro
+      - ../.aws/config:/home/calitp/app/config:ro

   dev:
     build:
@@ -61,24 +61,24 @@
   s3pull:
     image: amazon/aws-cli
     entrypoint: ["/bin/sh"]
-    command: ["-c", "aws s3 cp s3://${AWS_BUCKET} ."]
+    command: ["-c", "aws s3 sync s3://${AWS_BUCKET} ."]
     environment:
      - AWS_ACCESS_KEY_ID
       - AWS_SECRET_ACCESS_KEY
       - AWS_DEFAULT_REGION
     volumes:
-      - configvolume:/aws
+      - ../.aws/config:/aws

   s3push:
     image: amazon/aws-cli
     entrypoint: ["/bin/sh"]
-    command: ["-c", "aws s3 cp . s3://${AWS_BUCKET}"]
+    command: ["-c", "aws s3 sync . s3://${AWS_BUCKET}"]
     environment:
       - AWS_ACCESS_KEY_ID
       - AWS_SECRET_ACCESS_KEY
       - AWS_DEFAULT_REGION
     volumes:
-      - configvolume:/aws
+      - ../.aws/config:/aws

   tests-e2e:
     image: cypress/included:8.5.0
@@ -89,6 +89,3 @@
     working_dir: /usr/src/e2e
     volumes:
       - ../tests/e2e:/usr/src/e2e
-
-volumes:
-  configvolume:
```
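Relative bind-mount paths in Compose resolve against the directory containing the compose file, so `../.aws/config` in `localhost/docker-compose.yml` points at `.aws/config` in the repository root. A quick sketch of that resolution (directory layout is illustrative):

```shell
# Recreate the repository layout and resolve the bind-mount source the way
# Compose would: relative to the compose file's directory (localhost/).
repo="$(mktemp -d)"
mkdir -p "$repo/localhost" "$repo/.aws/config"
cd "$repo/localhost"
resolved="$(cd ../.aws/config && pwd)"
echo "$resolved"
```

This is why the named `configvolume` could be dropped: the host directory itself is now the single source of truth for both the `client` service and the `s3pull`/`s3push` helpers.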