This project template builds on the deployment principles described in python_app_to_k8s_automated, but deploys to AWS Elastic Container Service (ECS) using Fargate instead of Kubernetes/EKS.
- Clone the repo and create a new local branch.
- Make desired changes to the application.
- Push to a new remote branch (`git push -u origin <branch-name>`). A new PR can then be opened.
- The `test.yml` workflow will then execute via GitHub Actions (the trigger is a push to any branch apart from `main`). It will install Python, install the dependencies, and run `pytest` on a virtual Ubuntu machine.
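As an illustration (this is a minimal sketch, not a copy of the actual file in this repo — the Python version and requirements file name are assumptions), such a workflow might look like:

```yaml
name: test

on:
  push:
    branches-ignore:
      - main          # run on every branch except main

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      # Install dependencies, then run the test suite
      - run: pip install -r requirements.txt
      - run: pytest
```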
- If the tests pass, the PR can be approved and merged. A second workflow (`build_and_deploy.yml`) will trigger when it detects a merged PR. This workflow builds the Docker image, pushes it to AWS ECR, then deploys any application or scheduling changes to the task definition in ECS Fargate. Ensure that the ECS cluster is already set up by following the steps below.
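One common way to wire a "merged PR" trigger (an assumption about how the workflow might be set up, not the file itself) is to listen for closed pull requests against `main` and check the merged flag:

```yaml
name: build_and_deploy

on:
  pull_request:
    types: [closed]
    branches:
      - main

jobs:
  build_and_deploy:
    # Only run when the PR was actually merged, not just closed
    if: github.event.pull_request.merged == true
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # ... build the image, push to ECR, deploy to ECS ...
```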
- Create an AWS ECR repository and manually push a version of the application Docker image to it (with a tag of `latest`). Make a note of the `ECR_REPOSITORY` name.
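A sketch of that manual ECR setup from the CLI — the repository name `my-app`, the account id, and the region are all placeholders to replace with your own:

```shell
# Create the repository ("my-app" is an assumed name)
aws ecr create-repository --repository-name my-app

# Authenticate Docker against your ECR registry
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# Build, tag as "latest", and push
docker build -t my-app .
docker tag my-app:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest
```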
- Create a cluster in ECS (using Fargate). Make a note of the `ECS_CLUSTER` name.
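The same step from the CLI, if you prefer it over the console (the cluster name is a placeholder):

```shell
# Create a Fargate-capable ECS cluster; "my-cluster" is an assumed name
aws ecs create-cluster --cluster-name my-cluster

# Confirm it exists and is ACTIVE
aws ecs describe-clusters --clusters my-cluster
```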
- Create a task definition in ECS using Fargate or EC2. EC2 mode requires you to create at least one EC2 instance, and the tasks are then run via this instance. In the container definition, point to the image defined above in ECR. Make a note of the container `name` and task definition name (aka `family`). Finally, select 'Auto-configure CloudWatch Logs' in the container definition so you can see the output of your containerised application in the Logs section of CloudWatch every time it runs.
- Optionally, you can create a service in ECS that uses the task definition defined above. This is for when you want your application/container to run continuously. For this example project, we just want the application to run every hour, and this can be done with a cron-like scheduler instead. If you want to deploy to a service, see the additional deploy steps needed in the yml file.
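If you do go the service route, the CLI equivalent looks roughly like this — every name, count, subnet, and security group here is a placeholder:

```shell
# Only needed if you want the container to run continuously
aws ecs create-service \
  --cluster my-cluster \
  --service-name my-service \
  --task-definition my-task-def \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}"
```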
- Set `ECR_REPOSITORY`, `ECS_CLUSTER`, and `CONTAINER_NAME` to the relevant names within the environment variables section of `build_and_deploy.yml`.
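That environment variables section might look like this (all values are placeholders for the names you noted earlier):

```yaml
env:
  AWS_REGION: eu-west-1
  ECR_REPOSITORY: my-app        # the ECR repository created earlier
  ECS_CLUSTER: my-cluster       # the ECS cluster name
  CONTAINER_NAME: my-container  # must match containerDefinitions.name
```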
- Within `task-definition.json`, set the `family` param to the task definition name that you made. Set the `containerDefinitions.name` param to the same value as `CONTAINER_NAME`.
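A trimmed sketch of the relevant fields in `task-definition.json` (other required fields omitted; the names and image URI are placeholders):

```json
{
  "family": "my-task-def",
  "requiresCompatibilities": ["FARGATE"],
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/my-app:latest"
    }
  ]
}
```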
- Set the cluster `Arn` to yours in `scheduledtask.json`. Also change the account id in `RoleArn` to yours. You can set the `Id` to whatever you want.
- Set your task definition ARN as the value for `TaskDefinitionArn` in `scheduledtask.json`. If you create a new version of the task definition, make sure to use the latest version.
- `scheduledtask.json` is used to set up the cron-like scheduled job. You need to change `Subnets` and `SecurityGroups` to your values. The easiest way to find these is to manually create a schedule rule in the 'Scheduled Tasks' section of your cluster in the AWS Console, and afterwards copy the subnet and security group values into the json file. The manually created schedule can then be deleted.
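Pulling the previous three steps together — and assuming the file follows the targets format consumed by `aws events put-targets` — the fields you need to edit in `scheduledtask.json` might look roughly like this, with every ARN, id, subnet, and security group a placeholder for your own values:

```json
[
  {
    "Id": "my-scheduled-task",
    "Arn": "arn:aws:ecs:eu-west-1:123456789012:cluster/my-cluster",
    "RoleArn": "arn:aws:iam::123456789012:role/ecsEventsRole",
    "EcsParameters": {
      "TaskDefinitionArn": "arn:aws:ecs:eu-west-1:123456789012:task-definition/my-task-def:1",
      "LaunchType": "FARGATE",
      "NetworkConfiguration": {
        "awsvpcConfiguration": {
          "Subnets": ["subnet-abc123"],
          "SecurityGroups": ["sg-abc123"],
          "AssignPublicIp": "ENABLED"
        }
      }
    }
  }
]
```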
- Change the cron schedule to your desired schedule at the bottom of `build_and_deploy.yml`. Note that the syntax is slightly different to cron, see details here. You can also change the rule `--name` to whatever you want. There are two commands: one for adding a schedule and one for removing it. Comment out whichever one is not relevant for you. Note that you need to pass the `Id` of the schedule if you are turning it off.
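The add/remove pair typically boils down to EventBridge calls like these — the rule name and schedule are placeholders, and note the `cron()` wrapper with the `?` field, which is where the syntax differs from plain crontab:

```shell
# Add (or update) an hourly schedule rule and attach the task as a target
aws events put-rule --name my-schedule-rule --schedule-expression "cron(0 * * * ? *)"
aws events put-targets --rule my-schedule-rule --targets file://scheduledtask.json

# Remove: targets must be detached by Id before the rule can be deleted
aws events remove-targets --rule my-schedule-rule --ids my-scheduled-task
aws events delete-rule --name my-schedule-rule
```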
- Add your `AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, and `AWS_REGION` to the secrets section of this repo (in the settings).
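If you use the GitHub CLI, the same secrets can be added without the web UI (each command prompts you to paste the value):

```shell
gh secret set AWS_ACCESS_KEY_ID
gh secret set AWS_SECRET_ACCESS_KEY
gh secret set AWS_REGION
```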
- If EC2 instances aren't being created when you create a cluster (not relevant for the Fargate type), check the Auto Scaling group section for why they may not have come up. When you create an instance in the cluster create page, you have to select a subnet; different subnets relate to different AZs, and some EC2 instance types aren't available in some AZs.
- If the tasks aren't running (either when you execute them via the cron scheduler or run them manually), check the CloudTrail logs.
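One quick way to do that check from the CLI, assuming the task launches surface as `RunTask` API calls in CloudTrail:

```shell
# List recent RunTask API calls; failed launches show an error code in the event
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=RunTask \
  --max-results 10
```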