Enables you to deploy your Gatsby site to an S3 bucket.
Requires very little configuration, while optimizing your site as much as possible.
- 📦 Fully handles the deployment process for you; all you need to configure is your bucket name.
- Automatically creates/updates the bucket with optimal configuration applied.
- Syncs Gatsby files to the bucket & updates metadata.
- ⏭ Redirects.
- 💾 Optimizes caching for you.
- ☁️ Optional Serverless Framework support if you want to take things a step further.
- ✏️ Add your own params to uploaded S3 objects (if you wish).
Install the plugin:
```sh
npm i gatsby-plugin-s3
```
Add it to your `gatsby-config.js` & configure the bucket name (required):
```js
plugins: [
  {
    resolve: `gatsby-plugin-s3`,
    options: {
      bucketName: 'my-website-bucket'
    },
  },
]
```
There are more fields that can be configured; see below.
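As an illustration, here is a sketch of a slightly fuller configuration. Only `bucketName` is required; the `region`, `protocol`, and `hostname` options shown are assumptions based on common usage, so check the plugin's option reference for the exact names and defaults:

```js
// gatsby-config.js (sketch)
plugins: [
  {
    resolve: `gatsby-plugin-s3`,
    options: {
      bucketName: 'my-website-bucket',
      // assumed option: the AWS region the bucket should live in
      region: 'eu-west-1',
      // assumed options: used so generated redirects point at your
      // custom domain instead of the raw S3 website endpoint
      protocol: 'https',
      hostname: 'www.example.com',
    },
  },
]
```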
Add a deployment script to your `package.json`:

```json
"scripts": {
    ...
    "deploy": "gatsby-plugin-s3 deploy"
}
```
Optionally, you can skip the confirmation prompt by adding `--yes` to the command, like so:

```json
"deploy": "gatsby-plugin-s3 deploy --yes"
```

When gatsby-plugin-s3 detects a CI environment, it will skip this prompt by default.
After configuring credentials (see below), you can now execute `npm run build && npm run deploy` to have your site built and immediately deployed to S3.
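If you prefer a single command, the two steps compose like any other npm scripts; the `release` script name below is just an example:

```json
"scripts": {
    "build": "gatsby build",
    "deploy": "gatsby-plugin-s3 deploy",
    "release": "npm run build && npm run deploy"
}
```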
A couple of different methods of specifying credentials exist; the easiest is via the AWS CLI:
```sh
# NOTE: ensure Python is installed
pip install awscli
aws configure
```
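Running `aws configure` prompts for your access key, secret key, default region, and output format, and stores them in the standard AWS CLI files, roughly like this:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>
```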
If you don't want to have your credentials saved globally (e.g. you're dealing with multiple sets of credentials in the same environment), they can be set as environment variables instead, for example:
```sh
AWS_ACCESS_KEY_ID=xxxx AWS_SECRET_ACCESS_KEY=xxxx npm run deploy
```
Additionally, these can be set in a local `.env` file, but this requires a bit more setup work. See the recipe here.
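The rough idea of that recipe, as a minimal sketch assuming the `dotenv` package, is to load the file at the top of `gatsby-config.js` before the plugin options are read:

```js
// gatsby-config.js (sketch, assumes `dotenv` is installed)
// Loads variables from .env.development / .env.production into
// process.env before the plugin configuration below is evaluated.
require('dotenv').config({
  path: `.env.${process.env.NODE_ENV}`,
})

module.exports = {
  plugins: [
    {
      resolve: `gatsby-plugin-s3`,
      options: {
        // hypothetical variable name: pick the bucket per environment
        bucketName: process.env.TARGET_BUCKET_NAME || 'my-website-bucket',
      },
    },
  ],
}
```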
Several recipes are available:
Learn how to retrieve AWS credentials from a `.env` file, and additionally set up a different bucket name depending on your environment.
Learn how to override the content type gatsby-plugin-s3 sets on your files.
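As a rough idea of what that looks like, here is a sketch assuming the `params` option maps file globs to the S3 parameters that should override the defaults (see the recipe for the exact shape):

```js
// gatsby-config.js (sketch)
plugins: [
  {
    resolve: `gatsby-plugin-s3`,
    options: {
      bucketName: 'my-website-bucket',
      // assumed shape: map a file glob to the S3 params that should
      // override the plugin's defaults for matching objects
      params: {
        '**/*.webmanifest': {
          ContentType: 'application/manifest+json',
        },
      },
    },
  },
]
```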
CloudFront is a global CDN and can be used to make your blazing fast Gatsby site load even faster, particularly for first-time visitors. Additionally, CloudFront provides the easiest way to give your S3 bucket a custom domain name and HTTPS support.
Serverless can be used in combination with gatsby-plugin-s3, swapping the plugin's deployment step for `sls deploy`.
Serverless gives you the added advantage of being able to add multiple AWS services such as Lambda, CloudFront, and more, all in the same repo, deployment step, and CloudFormation stack, while still benefiting from all the optimisations gatsby-plugin-s3 performs.
- See the recipe
Bare-bones implementation details on how to set up Serverless & gatsby-plugin-s3: see the with-serverless example.
Routing traffic from gatsby-plugin-s3 through an HTTP proxy during deployment can be done with an environment variable.
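For example, assuming the plugin honours the conventional proxy variables (check the recipe for the exact variable name it reads):

```sh
HTTP_PROXY=http://127.0.0.1:8080 npm run deploy
```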
Using Yandex, DigitalOcean, or any other S3-compliant storage service together with gatsby-plugin-s3
You can deploy your site to a prefix, leaving all other data in the bucket intact.
gatsby-plugin-s3 respects the `pathPrefix` Gatsby option with no additional setup needed for this plugin, so you can follow the guide in the Gatsby docs.
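In other words, set `pathPrefix` as in any Gatsby project and build with prefixed paths before deploying; the prefix value below is just an example:

```js
// gatsby-config.js
module.exports = {
  pathPrefix: `/my-prefix`,
  plugins: [
    {
      resolve: `gatsby-plugin-s3`,
      options: {
        bucketName: 'my-website-bucket',
      },
    },
  ],
}
```

Then run `gatsby build --prefix-paths` before deploying, so the generated site is built for the prefixed location.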
AWS S3 has an undocumented limit on the number of Routing Rules that can be applied to a bucket. Unfortunately, this limits the number of 302 (temporary) redirects you can create. For 301 (permanent) redirects, a way around the limit is to set the x-amz-website-redirect-location header on an empty object.
To enable this behavior, set the `generateRedirectObjectsForPermanentRedirects` configuration option to `true`.
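For example (the option name is as described above; the rest of the config mirrors the earlier example):

```js
// gatsby-config.js
plugins: [
  {
    resolve: `gatsby-plugin-s3`,
    options: {
      bucketName: 'my-website-bucket',
      // use empty objects carrying x-amz-website-redirect-location
      // for 301 redirects instead of S3 Routing Rules
      generateRedirectObjectsForPermanentRedirects: true,
    },
  },
]
```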