Deploy your website to S3


What s3_website can do for you

  • Create and configure an S3 website for you
  • Upload your static website to AWS S3
    • Jekyll and Nanoc are automatically supported
  • Help you use AWS CloudFront to distribute your website
  • Improve page speed with HTTP cache control and gzipping
  • Set HTTP redirects for your website
  • (for other features, see the documentation below)


Install

gem install s3_website


Usage

Here's how you can get started:

  • In AWS IAM, create API credentials that have sufficient permissions on S3
  • Go to your website directory
  • Run s3_website cfg create. This generates a configuration file called s3_website.yml.
  • Put your AWS credentials and the S3 bucket name into the file
  • Run s3_website cfg apply. This will configure your bucket to function as an S3 website. If the bucket does not exist, the command will create it for you.
  • Run s3_website push to push your website to S3. Congratulations! You are live.

For Jekyll users

Run s3_website cfg create in the root directory of your Jekyll project. s3_website will automatically look for the site output in the _site directory.

For Nanoc users

Run s3_website cfg create in the root directory of your Nanoc project. s3_website will automatically look for the site output in the public/output directory.

For others

It's a good idea to store the s3_website.yml file in your project's root. Let's say the contents you wish to upload to your S3 website bucket are in my_website_output. You can upload the contents of that directory with s3_website push --site my_website_output.

Using environment variables

You can use ERB in your s3_website.yml file to incorporate environment variables:

s3_id: <%= ENV['S3_ID'] %>
s3_secret: <%= ENV['S3_SECRET'] %>

(If you are using s3_website on an EC2 instance with IAM roles, you can omit the s3_id and s3_secret keys in the config file.)
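To see what this mechanism amounts to, here is a minimal sketch of rendering such a file with ERB and then parsing it as YAML. The credential values below are made up purely for the demonstration:

```ruby
require 'erb'
require 'yaml'

# Hypothetical values, set here only for the demonstration:
ENV['S3_ID']     = 'AKIAEXAMPLE'
ENV['S3_SECRET'] = 'example-secret'

template = <<~YML
  s3_id: <%= ENV['S3_ID'] %>
  s3_secret: <%= ENV['S3_SECRET'] %>
YML

# s3_website runs the configuration file through ERB before parsing it as YAML:
config = YAML.safe_load(ERB.new(template).result)
config['s3_id']  # => "AKIAEXAMPLE"
```

This keeps secrets out of version control: commit the template, and supply the real values via the environment of the machine that runs s3_website push.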

Project goals

  • Provide a command-line interface tool for deploying and managing S3 websites
    • Create commands such as s3_website push, s3_website cfg create and s3_website cfg apply
  • Let the user have all the S3 website configurations in a file
  • Minimise or remove the need to use the AWS Console
  • Allow the user to deliver the website via CloudFront
  • Automatically detect the most common static website tools, such as Jekyll or Nanoc
  • Be simple to use: require only the S3 credentials and the name of the S3 bucket
  • Let the power users benefit from advanced S3 website features such as redirects, Cache-Control headers and gzip support
  • Be as fast as possible. Do in parallel all that can be done in parallel.
  • Maintain 90% backward compatibility with the jekyll-s3 gem

s3_website attempts to be a command-line interface tool that is easy to understand and use. For example, s3_website --help should print you all the things it can perform. Please create an issue if you think the tool is incomprehensible or inconsistent.

Additional features

Cache Control

You can use the max_age configuration option to enable more effective browser caching of your static assets. There are two possible ways to use the option: you can specify a single age (in seconds) like so:

max_age: 300

Or you can specify a hash of globs; all files matching a glob will get the specified age:

max_age:
  "assets/*": 6000
  "*": 300

Place the configuration into the file s3_website.yml.
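The glob matching can be sketched in plain Ruby. The resolution order below (first matching glob wins) is an assumption for illustration, not a description of s3_website's internals:

```ruby
# Globs mapped to max-age values, as in the s3_website.yml example above:
MAX_AGE = {
  "assets/*" => 6000,
  "*"        => 300
}

# Assumed semantics: the first glob that matches the object key decides the age.
def max_age_for(key)
  _glob, age = MAX_AGE.find { |glob, _| File.fnmatch(glob, key) }
  age
end

max_age_for("assets/app.css")  # => 6000
max_age_for("index.html")      # => 300
```

Putting the more specific globs first therefore lets you give long cache lifetimes to fingerprinted assets while keeping HTML pages fresh.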

Gzip Compression

If you choose, you can gzip-compress certain file types before uploading them to S3. This is a recommended practice for maximizing page speed and minimizing bandwidth usage.

To enable Gzip compression, simply add a gzip option to your s3_website.yml configuration file:

gzip: true

Note that you can additionally specify the file extensions you want to gzip (when gzip: true, the extensions .html, .css, .js and .txt are compressed):

gzip:
  - .html
  - .css
  - .md

Remember that the extensions here are referring to the compiled extensions, not the pre-processed extensions.
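As a sketch of what gzipping before upload means, the following compresses an HTML payload in memory. Serving the result correctly requires a Content-Encoding: gzip response header on the object; the code is illustrative, not s3_website's internal implementation:

```ruby
require 'zlib'
require 'stringio'

html = '<html><body>' + 'Hello, world! ' * 100 + '</body></html>'

# Compress the payload in memory, as a gzipping uploader would:
io = StringIO.new
gz = Zlib::GzipWriter.new(io)
gz.write(html)
gz.close
compressed = io.string

# The compressed object must be served with a Content-Encoding: gzip
# header so that browsers decompress it transparently.
compressed.bytesize < html.bytesize  # => true
```

Repetitive text formats such as HTML, CSS and JavaScript typically shrink severalfold, which is why they are the default candidates for compression.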

Using non-standard AWS regions

By default, s3_website uses the US Standard Region. You can upload your website to other regions by adding the setting s3_endpoint into the s3_website.yml file.

For example, the following line in s3_website.yml will instruct s3_website to push your site into the Tokyo region:

s3_endpoint: ap-northeast-1

The valid s3_endpoint values are the S3 location constraint values, as listed in Amazon's S3 region documentation.

Ignoring files you want to keep on AWS

Sometimes there are files or directories you want to keep on S3, but not on your local machine. You may define a regular expression to ignore files like so:

ignore_on_server: that_folder_of_stuff_i_dont_keep_locally
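The value is interpreted as a regular expression matched against S3 object keys. A sketch of the assumed semantics (keys matching the pattern are exempt from deletion; this is illustrative, not s3_website's internal code):

```ruby
# The ignore_on_server value from s3_website.yml, compiled as a regexp:
ignore = Regexp.new("that_folder_of_stuff_i_dont_keep_locally")

# Keys that exist on S3 but not on the local machine:
keys_only_on_s3 = [
  "that_folder_of_stuff_i_dont_keep_locally/img.jpg",
  "stale/page.html"
]

# Matching keys are kept on S3; the rest are candidates for deletion:
to_delete = keys_only_on_s3.reject { |key| key =~ ignore }
to_delete  # => ["stale/page.html"]
```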

Reduced Redundancy

You can reduce the cost of hosting your blog on S3 by using Reduced Redundancy Storage:

  • In s3_website.yml, set s3_reduced_redundancy: true
  • All objects uploaded after this change will use the Reduced Redundancy Storage.
  • If you want to change all of the files in the bucket, you can change them through the AWS console, or update the timestamp on the files before running s3_website again

How to use CloudFront to deliver your blog

It is easy to deliver your S3-based website via CloudFront, Amazon's CDN.

Creating a new CloudFront distribution

When you run the command s3_website cfg apply, it will ask you whether you want to deliver your website via CloudFront. If you answer yes, the command will create a CloudFront distribution for you.

Using your existing CloudFront distribution

If you already have a CloudFront distribution that serves data from your website S3 bucket, just add the following line into the file s3_website.yml:

cloudfront_distribution_id: your-dist-id

Next time you run s3_website push, it will invalidate the items on CloudFront and thus force the CDN system to reload the changes from your website S3 bucket.

Specifying custom settings for your CloudFront distribution

s3_website lets you define custom settings for your CloudFront distribution.

For example, you can define your own TTL and CNAME like this:

cloudfront_distribution_config:
  default_cache_behavior:
    min_TTL: <%= 60 * 60 * 24 %>
  aliases:
    quantity: 1
    items:
      CNAME: your.website.com

Once you've saved the configuration into s3_website.yml, you can apply them by running s3_website cfg apply.

The headless mode

s3_website has a headless mode, where human interactions are disabled.

In the headless mode, s3_website will automatically delete the files on the S3 bucket that are not on your local computer.

Enable the headless mode by adding the --headless argument after s3_website.

Configuring redirects on your S3 website

You can set HTTP redirects on your S3 website in two ways. If you only need simple "301 Moved Permanently" redirects for certain keys, use the Simple Redirects method. Otherwise, use the Routing Rules method.

Simple Redirects

For simple redirects s3_website uses Amazon S3's x-amz-website-redirect-location metadata. It will create zero-byte objects for each path you want redirected with the appropriate x-amz-website-redirect-location value.

To set up simple redirect rules, list each path and target as key-value pairs under the redirects configuration option:

redirects:
  index.php: /
  about.php: about.html
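As an illustration of the mechanism, the following sketch builds the zero-byte redirect objects an uploader would create. The hash-based representation is hypothetical; only the x-amz-website-redirect-location metadata key comes from the S3 API:

```ruby
# Redirects as configured in s3_website.yml:
redirects = { "index.php" => "/", "about.php" => "about.html" }

# Each redirect becomes a zero-byte object whose metadata tells the
# S3 website endpoint where to send the browser:
objects = redirects.map do |key, target|
  {
    key: key,
    body: "",
    metadata: { "x-amz-website-redirect-location" => target }
  }
end

objects.first[:metadata]["x-amz-website-redirect-location"]  # => "/"
```

When a browser requests index.php, the S3 website endpoint answers with a 301 pointing at /.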

Routing Rules

You can configure more complex redirect rules by adding the following configuration into the s3_website.yml file:

routing_rules:
  - condition:
      key_prefix_equals: blog/some_path
    redirect:
      replace_key_prefix_with: some_new_path/
      http_redirect_code: 301

After adding the configuration, run the command s3_website cfg apply on your command-line interface. This will apply the routing rules on your S3 bucket.

For more information on configuring redirects, see the documentation of the configure-s3-website gem, which comes as a transitive dependency of the s3_website gem. (The command s3_website cfg apply internally calls the configure-s3-website gem.)

Using s3_website as a library

By nature, s3_website is a command-line interface tool. You can, however, use it programmatically by calling the same API as the executable s3_website does:

require 's3_website'
is_headless = true
S3Website::Tasks.push('/website/root', '/path/to/your/website/_site/', is_headless)

You can also use a plain Hash instead of an s3_website.yml file:

config = {
  "s3_id"     => YOUR_AWS_S3_ACCESS_KEY_ID,
  "s3_secret" => YOUR_AWS_S3_SECRET_ACCESS_KEY,
  "s3_bucket" => "your-bucket-name"
}
in_headless = true
S3Website::Uploader.run('/path/to/your/website/_site/', config, in_headless)

The S3Website::Tasks.push call above assumes that the s3_website.yml file is in the directory /website/root.

Specifying custom concurrency level

By default, s3_website does 25 operations in parallel. An operation can be an HTTP PUT operation against the S3 API, for example.

You can increase the concurrency level by adding the following setting into the s3_website.yml file:

concurrency_level: <integer>

If your site has 100 files, it's a good idea to set the concurrency level to 100. As a result, s3_website will process each of your 100 files in parallel.

If you experience the "too many open files" error, either increase the amount of maximum open files (on Unix-like systems, see man ulimit) or decrease the concurrency_level setting.
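How a fixed concurrency level bounds the number of parallel operations can be sketched with a worker pool. This is illustrative only, not s3_website's internal implementation:

```ruby
# The concurrency_level setting caps how many uploads run at once:
CONCURRENCY_LEVEL = 25

files = (1..100).map { |i| "file_#{i}.html" }

# Fill a work queue, followed by one stop sentinel per worker:
queue = Queue.new
files.each { |f| queue << f }
CONCURRENCY_LEVEL.times { queue << :done }

uploaded = Queue.new
workers = CONCURRENCY_LEVEL.times.map do
  Thread.new do
    while (file = queue.pop) != :done
      uploaded << file  # a real worker would issue an HTTP PUT here
    end
  end
end
workers.each(&:join)

uploaded.size  # => 100
```

With 100 files and 25 workers, at most 25 uploads are in flight at any moment; raising concurrency_level widens the pool at the cost of more open connections and file handles.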

Example configurations

For example configurations, see the project's documentation.
Known issues

None. Please send a pull request if you spot any.



Versioning

s3_website uses Semantic Versioning.


Development

  • Install bundler and run bundle install
  • Run all tests by invoking rake test


Contributing

We (users and developers of s3_website) welcome patches, pull requests and ideas for improvement.

When sending pull requests, please accompany them with tests. Favor BDD style in test descriptions. Use VCR-backed integration tests where possible. For reference, you can look at the existing s3_website tests.

If you are not sure how to test your pull request, you can ask the gem owners to supplement the request with tests. However, by including proper tests, you increase the chances of your pull request being incorporated into future releases.

Checklist for new features

  • Is it tested?
  • Is it documented in README?
  • Is it mentioned in resources/configuration_file_template.yml?


License

MIT. See the LICENSE file for more information.


Credits

This gem was created by Lauri Lehmijoki. Without the valuable work of Philippe Creux on jekyll-s3, this project would not exist.

Contributors (in alphabetical order)

  • Alan deLevie
  • Cory Kaufman-Schofield
  • Chris Kelly
  • Chris Moos
  • David Michael Barr
  • László Bácsi
  • Mason Turner
  • Michael Bleigh
  • Philippe Creux
  • Shigeaki Matsumura
  • stanislas
  • Trevor Fitzgerald
  • Zee Spencer