s3deploy


A simple tool to deploy static websites to Amazon S3 with support for Gzip and custom headers (e.g. "Cache-Control"). It uses ETag hashes to check if a file has changed, which makes it a good fit for static site generators like Hugo.

Install

Pre-built binaries can be found on the project's GitHub Releases page.

s3deploy is a Go application, so you can also get and build it yourself via go get:

 go get -u -v github.com/bep/s3deploy

Note that s3deploy is a perfect tool to use with a continuous integration tool such as CircleCI. See this static site for a simple example of automated deployment of a Hugo site to Amazon S3 via s3deploy. The most relevant files are circle.yml and .s3deploy.yml. For another example, see this tutorial that uses s3deploy with CircleCI.
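
For illustration, a deployment section in a circle.yml (CircleCI 1.0 syntax) might look roughly like the sketch below; the branch name, bucket, region and output directory are placeholders, so adapt them to your own setup:

dependencies:
  pre:
    - go get -u -v github.com/bep/s3deploy
deployment:
  production:
    branch: master
    commands:
      # Build the site, then sync the generated files to S3.
      - hugo
      - s3deploy -source=public/ -region=eu-west-1 -bucket=example.com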

Use

Usage of s3deploy:
  -V    print version and exit
  -bucket string
        destination bucket name on AWS
  -config string
        optional config file (default ".s3deploy.yml")
  -force
        upload even if the etags match
  -h    help
  -key string
        access key ID for AWS
  -max-delete int
        maximum number of files to delete per deploy (default 256)
  -path string
        optional bucket sub path
  -quiet
        enable silent mode
  -region string
        name of AWS region
  -secret string
        secret access key for AWS
  -source string
        path of files to upload (default ".")
  -try
        trial run, no remote updates
  -v    enable verbose logging
  -workers int
        number of workers to upload files (default -1)
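
A typical invocation, assuming the generated site lives in public/ (the bucket name and region below are placeholders for your own values):

s3deploy -source=public/ -bucket=example.com -region=eu-west-1

Add -try for a trial run with no remote updates, and -v for verbose logging.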

Notes

  • The key and secret command flags can also be set with the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (see the example after the table below).
  • The region flag is the AWS API name for the region where your bucket resides. See the table below, or the AWS Regions documentation, for an up-to-date list.
Bucket region               API value
Canada (Central)            ca-central-1
US East (Ohio)              us-east-2
US East (N. Virginia)       us-east-1
US West (N. California)     us-west-1
US West (Oregon)            us-west-2
EU (Frankfurt)              eu-central-1
EU (Ireland)                eu-west-1
EU (London)                 eu-west-2
EU (Paris)                  eu-west-3
South America (São Paulo)   sa-east-1
Asia Pacific (Mumbai)       ap-south-1
Asia Pacific (Seoul)        ap-northeast-2
Asia Pacific (Singapore)    ap-southeast-1
Asia Pacific (Sydney)       ap-southeast-2
Asia Pacific (Tokyo)        ap-northeast-1
China (Beijing)             cn-north-1
China (Ningxia)             cn-northwest-1
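
For example, instead of passing credentials as flags, you can export them before running s3deploy (the key values below are the standard AWS documentation examples, and the bucket and region are placeholders):

export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRkiCYEXAMPLEKEY
s3deploy -bucket=example.com -region=ap-southeast-2 -source=public/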

Global AWS Configuration

See https://docs.aws.amazon.com/sdk-for-go/api/aws/session/#hdr-Sessions_from_Shared_Config

The AWS SDK will fall back to credentials from ~/.aws/credentials.

If you set the AWS_SDK_LOAD_CONFIG environment variable, the SDK will also load shared configuration from ~/.aws/config, where you can set a default region to use when one is not given on the command line.
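
For reference, a minimal shared setup might look like the sketch below; the values are examples, and both files use the standard AWS INI format:

# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRkiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = eu-west-1

With AWS_SDK_LOAD_CONFIG=1 set, s3deploy can then typically be run without the -key, -secret and -region flags.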

Advanced Configuration

Add a .s3deploy.yml configuration file in the root of your site. Example configuration:

routes:
    - route: "^.+\\.(js|css|svg|ttf)$"
      #  cache static assets for 20 years
      headers:
         Cache-Control: "max-age=630720000, no-transform, public"
      gzip: true
    - route: "^.+\\.(png|jpg)$"
      headers:
         Cache-Control: "max-age=630720000, no-transform, public"
      gzip: false
    - route: "^.+\\.(html|xml|json)$"
      gzip: true   
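
Before the first real deploy with a new configuration, it can be useful to do a dry run (the bucket and region below are placeholders):

s3deploy -source=public/ -region=eu-west-1 -bucket=example.com -try -v

The -try flag performs a trial run with no remote updates, and -v enables verbose logging, so you can verify that your routes match the files you expect.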

Example IAM Policy

{
   "Version": "2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Action":[
            "s3:ListBucket",
            "s3:GetBucketLocation"
         ],
         "Resource":"arn:aws:s3:::<bucketname>"
      },
      {
         "Effect":"Allow",
         "Action":[
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:DeleteObject"
         ],
         "Resource":"arn:aws:s3:::<bucketname>/*"
      }
   ]
}

Replace <bucketname> with the name of your own bucket.
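
If you manage IAM with the AWS CLI, one way to attach this policy to a dedicated deploy user is put-user-policy; the user and policy names below are examples:

aws iam put-user-policy --user-name s3deploy-user \
    --policy-name s3deploy-access \
    --policy-document file://policy.json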

Background Information

If you're looking at s3deploy, then you've probably already seen the aws s3 sync command. That command's sync strategy is not optimised for static sites: it compares the timestamp and size of each file to decide whether to upload it.

Because static site generators can recreate every file (even when its content is identical), the timestamps are updated, and aws s3 sync will needlessly upload every single file. s3deploy, on the other hand, compares ETag hashes, so only files whose content has actually changed are uploaded.
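
To make the idea concrete: for objects uploaded in a single part without server-side encryption, the S3 ETag is the hex-encoded MD5 of the object body, so a locally computed MD5 can be compared against it. The sketch below illustrates that comparison principle in Go; it is not s3deploy's actual implementation, and the file path is a placeholder.

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
)

// localETag computes the hex MD5 of a file, which matches the S3 ETag
// for objects uploaded in a single part without server-side encryption.
func localETag(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	// "public/index.html" is a placeholder path.
	etag, err := localETag("public/index.html")
	if err != nil {
		log.Fatal(err)
	}
	// Compare this against the ETag reported by S3 (with surrounding
	// quotes stripped); upload only when the two differ.
	fmt.Println(etag)
}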

Alternatives

  • go3up by Alexandru Ungur
  • s3up by Nathan Youngman (the starting-point of this project)