This project is licensed under the BSD 3-Clause “New” or “Revised” License. For more information, please see the license file.


The purpose of this project is to take advantage of the AWS free tier, specifically T2.Micro EC2 instances at this stage. In its current state, the project sets up a T2.Micro instance with a reasonable security profile, pushes a single Docker container to the instance, and runs it.

There is no reason why you couldn't set up many AWS accounts and run aws-docker-host against all of them, giving you many free EC2 hosts packed full of running Docker containers.

The next step is to push many containers to the same single instance. I haven't needed this yet, but probably will soon; it is pretty much just a matter of iterating on the aws_instance resource (as commented in the code).

aws-docker-host also adds (or modifies) a CloudFlare A record once your new instance is up and has its elastic IP, so your DNS keeps pointing to the new instance each time terraform apply is run.

There is no need to store your Docker images in a registry. terraform apply builds your app or service image locally, tars the image, pushes it to the new EC2 instance, installs the same version of Docker that you tested locally, loads the image, and runs it under the container name you specify with --restart=unless-stopped, so you can be sure your app will always be running.
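Sketched by hand, the registry-free flow above comes down to something like the following (the image name, container name, and instance address are placeholders, not the project's actual resource names):

```shell
docker build --tag myapp .                   # build the image locally
docker save -o /tmp/myapp.tar myapp         # tar the image
scp /tmp/myapp.tar user@<elastic-ip>:/tmp/  # push it to the instance
ssh user@<elastic-ip> \
  'docker load -i /tmp/myapp.tar &&
   docker run -d -p 80:3000 --restart=unless-stopped --name myapp myapp'
```

Terraform drives these same steps for you on apply; the sketch is only to show the shape of the pipeline.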


For deployment of a Docker host using Terraform. Go through this up front, then just before the free tier period (one year) runs out, use the following again. Some steps can be left out if the free period has not yet run out; there is nothing to stop you setting this infrastructure up (terraform plan -> terraform apply) and tearing it down (terraform destroy) repeatedly. Just keep in mind that every time an instance is started it is classed as an instance hour, and you only get 750 hours per month per account. Nothing to stop you setting up many accounts, though:
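The 750-hour allowance is enough for one instance running flat-out all month; a quick sanity check:

```shell
# A single instance left running for a 31-day month:
echo $((24 * 31))   # 744 hours, inside the 750-hour monthly allowance
```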

Make sure Terraform is installed:

  1. Download the zip
  2. Verify the checksum
  3. Extract to a place on your path:
  • sudo unzip ~/Downloads/<terraform zip> -d /opt/
  • sudo ln -s /opt/terraform /usr/bin/terraform
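Assuming 64-bit Linux and a download from releases.hashicorp.com (the version is a placeholder; use whichever release you have tested with), steps 1–3 might look like:

```shell
TF_VERSION="<version>"
ZIP="terraform_${TF_VERSION}_linux_amd64.zip"

# 1. Download the zip and the published checksums
curl -fsSLO "https://releases.hashicorp.com/terraform/${TF_VERSION}/${ZIP}"
curl -fsSLO "https://releases.hashicorp.com/terraform/${TF_VERSION}/terraform_${TF_VERSION}_SHA256SUMS"

# 2. Verify the checksum before extracting anything
grep "${ZIP}" "terraform_${TF_VERSION}_SHA256SUMS" | sha256sum -c -

# 3. Extract to a place on your path
sudo unzip "${ZIP}" -d /opt/
sudo ln -s /opt/terraform /usr/bin/terraform
```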

Make sure Docker is installed locally, ideally the same version that was last used to set up the infrastructure (detailed in aws-docker-host/tf/ as tested_docker_version).

  1. Create a Gmail account

    • Set up 2FA and backup codes
  2. Create an AWS account (root user)

    • Use dummy info
  3. Create AWS IAM user terraformedawsdocker from the root account

    • This account has the AdministratorAccess managed policy
    • This account had CLI access turned on so I could run Terraform initially. CLI access is not needed on this user, as I now use a lower-privileged user (see next step), and its CLI key is deactivated. When you create this user, you can deactivate its CLI key straight away
  4. Create AWS IAM user terraformedawsdocker-cli from the terraformedawsdocker user above

    • CLI access only
    • Created group docker<n>-cli
    • Added policies: AmazonEC2FullAccess and IAMFullAccess (needed for the IAM role we add in aws-docker-host/tf/iam/)

    This user runs terraform [ plan | apply | destroy ]. The access key ID and secret access key need to be updated in aws-docker-host/tf/ as access_key and secret_key.
    While you're there, update cloudflare_email and cloudflare_token if they have changed (unlikely), along with any other values in the file. Also make sure you have generated your SSH key pair, as I explain in my book. Just rename variables_override-example to add your configurations, chmod 600 it, keep it safe, and don't commit it to source control

  5. terraform init will be needed on any new development machine to load modules

    • Then it's just a matter of terraform plan -> terraform apply, and terraform destroy if you want to have another go.

Check CloudFlare record to satisfy yourself that the new elastic IP has been added as the A record for your domain.
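One way to check from the command line, assuming example.com stands in for your domain:

```shell
# Hypothetical domain; substitute your own.
dig +short A example.com
# The output should match the elastic IP that terraform apply reported.
```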

If this is the second time you're doing this with a new AWS account, once you have a successful deployment:

  1. Upgrade Terraform to latest version
  2. Upgrade Docker to latest stable version
  3. Deploy again -> test

When happy, turn services off on last year's AWS account, and bingo: you have another free year with very little work.

For ad-hoc or continuous deployment of individual containers

For example, once you have already run terraform apply and your infrastructure is set up, you can very easily just run terraform destroy -> terraform plan -> terraform apply again. But if you have other containers running on your free EC2 instance, that is overkill, so I usually just do the following:

Build the image:

docker build --tag <image_name> .

Test locally:

docker run -e "NODE_ENV=development" -p 80:3000 -it --name <container_name> <image_name>
# gives the container 10 seconds to stop its service gracefully; if it doesn't, it is forcefully stopped.
docker stop <container_name>
docker start <container_name>

You can deploy with a set of scripts similar to the following:



# The following commands should be similar to those run in terraform.

if [ -e "$remote_tarred_image_path" ] ; then
  echo "Stopping container: $container_name"
  echo "$(docker stop $container_name)"
  echo "Removing container: $container_name"
  echo "$(docker rm $container_name)"
  echo "Loading image from: $remote_tarred_image_path"
  echo "$(docker load -i $remote_tarred_image_path)"
  echo "Running container: $container_name from image: $image_name"
  echo "$(docker run -e "NODE_ENV=production" -p 80:3000 -d --restart=unless-stopped --name $container_name $image_name)"
else
  echo "The tarred image: $remote_tarred_image_path does not exist. No upgrade performed."
fi

echo "Waiting 5 seconds..."
sleep 5
echo "Currently running containers..."
echo "$(docker ps -a)"
rm $remote_tarred_image_path 2>/dev/null



# Check aws-docker-host for settings.
# Clean out docker images from time to time on targetServer.

readonly instance_user="<user name>"
readonly docker_image_name="<your image name>"
readonly image_app_content="<where your source is>"
readonly local_tarred_docker_image_path="<source path of your tarred image>"
readonly targetServer="<address of existing elastic IP>"
readonly sshPort="<ssh port of target instance>"
readonly private_key_file_path="/home/you/.ssh/id_rsa"
readonly remote_tarred_docker_image_path="<target path of your tarred image>"
readonly docker_container_name="<whatever you want to tag your container as>"

build_and_tar_image() {
   docker build --tag $docker_image_name $image_app_content
   echo "Tarring $docker_image_name"
   docker save -o $local_tarred_docker_image_path $docker_image_name
   echo "docker save produced file: $(ls -liah $local_tarred_docker_image_path)"
}

scp_tarred_img() {
   #scp -i $private_key_file_path -qv -P $sshPort $local_tarred_docker_image_path $instance_user@$targetServer:~ 2>&1 | grep -v debug
   # Rsync gives us a nice progress meter.
   echo "Rsync copying file..."
   rsync -v --progress -e "ssh -i $private_key_file_path -p $sshPort" $local_tarred_docker_image_path $instance_user@$targetServer:$remote_tarred_docker_image_path
   declare -r result=$?
   return $result
}

remote_host_work() {
   echo "Performing remote host work now ..."
   declare -r result=$(/usr/bin/ssh $instance_user@$targetServer -p $sshPort \
      remote_tarred_image_path=$remote_tarred_docker_image_path \
      container_name=$docker_container_name \
      image_name=$docker_image_name \
      'bash -s' < "./remoteWork")
   echo "$result"
}


Now just run the following once you've turned the executable bit on:
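The invocation itself isn't shown here; assuming the three functions above sit in a script named deploy (a hypothetical name), the tail of that script would be along the lines of:

```shell
# End of the hypothetical ./deploy script: run the stages in order,
# stopping at the first failure.
build_and_tar_image && scp_tarred_img && remote_host_work
```

Then chmod u+x deploy and run ./deploy.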



🚚 Use Terraform to pack a single free AWS EC2 instance with Docker containers



