Using Ruby and Capistrano, deploy a static website to an Amazon S3 website bucket.
capistrano-s3

Enables static websites deployment to Amazon S3 website buckets using Capistrano.


Hosting your website with Amazon S3

Amazon S3 provides website-enabled buckets that allow you to serve web pages directly from S3.

To learn how to set up your website bucket, see the Amazon documentation.

Getting started

# Gemfile
source 'https://rubygems.org'
gem 'capistrano-s3'

Setup

Install the gems with Bundler and create the public folder that will be published:

bundle install
mkdir -p public

The gem supports both flavors of Capistrano (2 and 3), though their configurations differ slightly.

Capistrano 2

First, initialise Capistrano for the project: bundle exec capify .

Replace the deploy.rb content generated by capify with this simple Amazon S3 configuration:

# config/deploy.rb
require 'capistrano/s3'

set :bucket,            "www.cool-website-bucket.com"
set :access_key_id,     "CHANGETHIS"
set :secret_access_key, "CHANGETHIS"

If you want to deploy to multiple buckets, have a look at Capistrano multistage and configure one bucket per stage.
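As a sketch of that idea (the stage names, file layout, and bucket names below are hypothetical examples, assuming the capistrano-ext multistage extension, not something capistrano-s3 ships):

```ruby
# config/deploy.rb -- hypothetical multistage sketch
require 'capistrano/ext/multistage'
require 'capistrano/s3'

set :stages, %w(staging production)
set :default_stage, 'staging'

set :access_key_id,     "CHANGETHIS"
set :secret_access_key, "CHANGETHIS"

# config/deploy/staging.rb -- one bucket per stage
set :bucket, "staging.cool-website-bucket.com"

# config/deploy/production.rb
set :bucket, "www.cool-website-bucket.com"
```

Running cap staging deploy or cap production deploy would then pick up the matching bucket.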

Capistrano 3

Initialise Capistrano by running bundle exec cap install.

Next add require "capistrano/s3" to Capfile.

Finally, replace deploy.rb content generated by Capistrano with this config:

# config/deploy.rb
set :bucket,            "www.cool-website-bucket.com"
set :access_key_id,     "CHANGETHIS"
set :secret_access_key, "CHANGETHIS"

Deploying

Add content to your public folder and run the deploy command:

  • cap deploy (Capistrano 2)

or

  • cap <stage> deploy (Capistrano 3).

Advanced options

Custom region

If your bucket is not in the default US Standard region, set region with:

set :region, 'eu-west-1'

Deployment path

You can set deployment_path to choose the local path to deploy, relative to the project root. Do not use a trailing slash. The default value is public.

set :deployment_path, 'dist'

Target path

You can also set a remote path, relative to the bucket root, using target_path. Do not use a trailing slash. The default value is empty (the bucket root).

set :target_path, 'app'

Write options

capistrano-s3 sets each file's :content_type and sets :acl to public-read; add to or override these options with:

set :bucket_write_options, {
    cache_control: "max-age=94608000, public"
}

See the aws-sdk S3 Client#put_object documentation for all available options.
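For instance, other put_object options can be merged in the same way. A hedged sketch (the values are purely illustrative):

```ruby
# config/deploy.rb -- illustrative values; any put_object option may appear here
set :bucket_write_options, {
  cache_control: "max-age=94608000, public",
  expires: Time.now + 60 * 60 * 24 * 365  # absolute expiry, one year from deploy
}
```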

Redirecting

Use :redirect_options to natively redirect (via HTTP 301 status code) any hosted page. For example:

set :redirect_options, {
  'index.html' => 'http://example.org',
  'another.html' => '/test.html',
}

The redirect_options parameter takes target_path into account, so you can use the same paths regardless of its value.

A valid redirect destination must either start with the http or https scheme, or begin with a leading slash (/).
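A sketch combining both rules with target_path (the file names are hypothetical); the paths are written as if deploying to the bucket root, even though files land under app/:

```ruby
# config/deploy.rb -- hypothetical paths for illustration
set :target_path, 'app'
set :redirect_options, {
  'old.html'   => '/new.html',            # destination with a leading slash
  'index.html' => 'https://example.org'   # destination with a scheme
}
```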

Upload only compressed versions

You can configure capistrano-s3 to only upload gzipped assets (when they are present), stripping the .gz suffix. This feature comes in handy because Amazon S3 does not provide a way to decide whether to serve compressed or uncompressed content depending on the Accept-Encoding header.

For example, if you have main.js and main.js.gz, capistrano-s3 will upload the compressed version as main.js to S3.

Please note:

  1. Only the file is renamed; the original Content-Type and a Content-Encoding: gzip header will still be served
  2. With this feature enabled, only compressed assets are served. Browser support for gzip is, however, very good.

Just add to your configuration:

set :only_gzip, true
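To illustrate the convention this feature expects, a build step (hypothetical file names, not part of capistrano-s3) can emit a .gz sibling for each asset using Ruby's standard Zlib:

```ruby
require "zlib"
require "fileutils"

# Hypothetical build step: write an asset and a gzipped sibling so that,
# with :only_gzip enabled, main.js.gz is uploaded to S3 as main.js.
FileUtils.mkdir_p("public")
File.write("public/main.js", "console.log('hello');")

Zlib::GzipWriter.open("public/main.js.gz") do |gz|
  gz.write(File.read("public/main.js"))
end
```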

CloudFront invalidation

If you set a CloudFront distribution ID (not the URL!) and an array of paths, capistrano-s3 will post an invalidation request. CloudFront supports wildcard invalidations. For example:

set :distribution_id, "CHANGETHIS"
set :invalidations, [ "/index.html", "/assets/*" ]

The CloudFront invalidation feature takes target_path into account. Write your invalidation paths relative to your target_path. For example, to invalidate everything inside the remote app folder:

set :target_path, "app"
set :distribution_id, "CHANGETHIS"
set :invalidations, [ "/*" ]

If you want to wait until the invalidation batch is completed (e.g. on a CI server), you can run cap <stage> deploy:s3:wait_for_invalidation. The command will wait indefinitely until the invalidation is completed.

Exclude files and directories

You can set a list of files or directories to exclude from upload. The paths must be relative to deployment_path; use the dir/**/* pattern to exclude directories.

set :exclusions, [ "index.html", "resources/**/*" ]

Example of usage

Our Ruby stack for static websites:

  • sinatra : awesome simple ruby web framework
  • sinatra-assetpack : deals with assets management, build static files into public/
  • sinatra-export : exports all sinatra routes into public/ as html or other common formats (json, csv, etc)

Mixing it in a capistrano task:

# config/deploy.rb
before 'deploy' do
  run_locally "bundle exec rake sinatra:export"
  run_locally "bundle exec rake assetpack:build"
end

See our boilerplate sinatra-static-bp for an example of the complete setup.

Migration guide

From < 2.0.0

If you have customized deployment_path, from 2.0 on use the simplified format:

# config/deploy.rb
-set :deployment_path, proc { Dir.pwd.gsub('\n', '') + '/build' }
+set :deployment_path, 'build'

If you have configured s3_endpoint to something other than the default, switch to the new syntax using region identifiers:

-set :s3_endpoint, 's3-eu-west-1.amazonaws.com'
+set :region, 'eu-west-1'

Contributing

See CONTRIBUTING.md for more details on contributing and running the tests.

Credits

hooktstudios

capistrano-s3 is maintained and funded by hooktstudios

Thanks & credits also to all other contributors.